The Mono documentation is not very clear on exactly how to install the packages they provide: it says to install the package "mono", yet there is no single package with that name. One of the many packages you have downloaded most likely provides mono as a virtual package, supplying the dependency your installation attempt complains about. Installing all the packages in one transaction should solve the issue:
cd /home/martin/Downloads
rpm -ivh *.rpm
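If you want to double-check which of the downloaded packages supplies a given virtual dependency before installing, rpm can list the capabilities each file provides (a quick sketch, assuming the rpm files sit in the current directory):
$ # list what every downloaded package provides and filter for the missing capabilities
$ rpm -qp --provides *.rpm | grep 'mono(System'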
I'm trying to install Mono on my openSUSE Linux. I downloaded the latest rpm packages from the Mono page and tried to install the mono-core package with the --test option:
rpm -ivh --test /home/martin/Downloads/mono-core-3.2.3-0.x86_64.rpm
but got an error saying:
error: Failed dependencies:
mono(System.ComponentModel.Composition) = 4.0.0.0 is needed by mono-core-3.2.3-0.x86_64
mono(System.ComponentModel.DataAnnotations) = 4.0.0.0 is needed by mono-core-3.2.3-0.x86_64
mono(System.Data) = 4.0.0.0 is needed by mono-core-3.2.3-0.x86_64
mono(System.Runtime.Serialization) = 4.0.0.0 is needed by mono-core-3.2.3-0.x86_64
mono(System.ServiceModel) = 4.0.0.0 is needed by mono-core-3.2.3-0.x86_64
I understand that these dependencies should be installed first, but there isn't any package called just 'mono'. Am I missing something here?
Installing Mono on openSUSE
Check the Mono:Factory project in OBS - it's not considered stable yet. You can also get there from the openSUSE package directory - click "select other versions", then "show unstable versions".
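If you would rather let zypper manage it than use the web interface, the project can be added as a repository directly (a sketch; obs:// URLs are resolved by zypper on openSUSE, and the platform segment must match your release):
$ sudo zypper ar obs://Mono:Factory/openSUSE_12.2 mono-factory
$ sudo zypper ref
$ # switch the installed mono packages over to that repository
$ sudo zypper dup --from mono-factory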
I have mono 2.10.9 installed on a VM with openSuse version 12.2. I'm trying to find a distribution for the latest v3 release of Mono which I can use to upgrade the version of mono that I have installed. I cannot find any distribution of Mono 3 for openSuse and I'm wondering how to go about upgrading to Mono 3 (if it's possible). Please note I come from mainly a Windows background, and while I have used Unix in the past (SCO Unix), the Unix world has changed a lot since I was last involved with it. I'm sure I can get mono to build on openSuse, but it's the number of other dependencies that are required, their versions, etc.; I'm just not comfortable enough with the environment yet to get a grip on it all. I hope that someone out there can point me in the right direction.
How to upgrade to “Mono v3” on openSuse 12.2
The apt-cache utility has the ability to search for packages. Currently, Apache modules are packaged in the form libapache2-mod-<module name>. On my Debian system:
$ apt-cache search libapache2-mod-mono
libapache2-mod-mono - Apache module for running ASP.NET applications on Mono
mono-apache-server1 - ASP.NET 1.1 backend for mod_mono Apache module
mono-apache-server2 - ASP.NET 2.0 backend for mod_mono2 Apache module
You can view the full package descriptions using apt-cache show <package>. After a new Apache module is installed in Debian, it still needs to be enabled:
a2enmod mono
/etc/init.d/apache2 restart
See the apt-cache manpage for more information.
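Putting that together, the whole setup is just an install plus the enable step shown above (the 2.0 backend is used here as an example; pick the one matching your ASP.NET version):
$ sudo apt-get install libapache2-mod-mono mono-apache-server2
$ sudo a2enmod mono
$ sudo /etc/init.d/apache2 restart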
I'm trying to install mod_mono on Linux Mint Maya/Cinnamon. I can't seem to find any tutorials or anything on how to do this. So far, XSP and MonoDevelop seem to be working. I created an MVC solution, started it and it seemed to work fine. I am also new to apache. This is a learning experience for me. :) Can someone help me? Is there some "sudo apt-get mod_mono_for_apache" command that I can execute?
How to install mod_mono on Linux Mint?
Go to this page at opensuse.org and click the "1-Click Install" button on the mono-complete-2.8.2 meta package. All your dependency loops will then be resolved automatically by the YaST manager. This is the usual, user-friendly way to install packages on openSUSE.
I have a virtual machine running openSuse 11.2 that has mono 2.6.4. I use this VM as a test server to test ASP.NET applications under Apache mod_mono. I wanted to upgrade (in the same virtual machine) to mono 2.8.2. I downloaded several rpm files from http://ftp.novell.com/pub/mono/download-stable/openSUSE_11.2/i586/ but I'm in a dependency "loop" and don't know which packages to install in what order... (Did I mention that I know very little of SUSE?) Edit: Is it possible to find a way to upgrade without network connectivity? Thanks!
How to upgrade mono on openSuse
It needed another instance of mono already installed, so apt-get install mono-gmcs did the job. Then a new error appeared, and the only solution seemed to be using the GitHub sources:
git clone git://github.com/mono/mono.git
cd mono
./autogen.sh --prefix=/usr/local
make
make install
Now I have the latest version of mono installed. NOTE: now I am not able to run any mono program, e.g. monodevelop. I couldn't find the solution to the exception, so now I'm downgrading mono to an old & compatible version. PS: If you find the solution, please leave a comment :)
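One way to avoid breaking the distribution's mono while experimenting is to give the self-built version its own prefix and only put it on PATH when needed (a sketch against the same source tree; /opt/mono is just an illustrative location):
$ ./autogen.sh --prefix=/opt/mono
$ make
$ sudo make install
$ # use the new build in this shell only; the system mono stays untouched
$ export PATH=/opt/mono/bin:$PATH
$ mono --version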
How can I install the latest release of Mono in Ubuntu Saucy? In order to develop in a free OS, I need to set up Mono (which now supports .NET 4.5). This is what I did:
Download and uncompress mono-mono-3.2.5
Run ./autogen.sh --prefix=/usr/local (finished ok)
Run make, but it exits with an error when checking dependencies. Terminal output:
....
mkdir -p -- build/deps
make[6]: gmcs: Command not found
make[6]: *** [build/deps/basic-profile-check.exe] Error 127
*** The compiler 'gmcs' doesn't appear to be usable.
*** You need Mono version 2.4 or better installed to build MCS
*** Check mono README for information on how to bootstrap a Mono installation.
make[5]: *** [do-profile-check] Error 1
make[4]: *** [profile-do--basic--all] Error 2
make[3]: *** [profiles-do--all] Error 2
make[2]: *** [all-local] Error 2
make[2]: Leaving directory `/home/hogar/Software/mono-mono-3.2.5/runtime'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/hogar/Software/mono-mono-3.2.5'
make: *** [all] Error 2
But to install mono-gmcs, Ubuntu will also install mono-runtime v2.10.8.1-5ubuntu2, and that is exactly what I'm trying not to install: an old version of mono. This is confusing me; what should I do?
How to install Mono v3+ in Ubuntu?
In order to install dotnet-cli, you could use yaourt, which helps to build and install AUR packages. You can proceed like this:
Add to /etc/pacman.conf:
[archlinuxfr]
SigLevel = Never
Server = http://repo.archlinux.fr/$arch
Update pacman and install yaourt:
sudo pacman -Sy yaourt
Next, install dotnet-cli using:
yaourt dotnet-cli
and follow yaourt's instructions.
I am using Apricity OS (based on Arch Linux). I want to install .NET Core. What is the command for installing it with pacman?
How do I install Dot Net Core on Arch Linux
Two days ago it was working; today I am having the same problem. I think the hash file is incorrect at the mono-project site. If you check the file size, or compute the hash of the Packages file yourself, you can see that it does not match the published hash.
Binaries: http://origin-download.mono-project.com/repo/debian/dists/wheezy/main/binary-amd64/
File sizes and hash list of packages: http://origin-download.mono-project.com/repo/debian/dists/wheezy/Release
Edit: they updated the checksum file and also the binaries a couple of minutes ago, and they match now. Try again; it might work now.
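You can verify this kind of mismatch yourself by comparing the actual hash of the Packages file with the one published in Release (a sketch using the URLs above; sha256sum is assumed to be available):
$ wget -q http://origin-download.mono-project.com/repo/debian/dists/wheezy/main/binary-amd64/Packages
$ wget -q http://origin-download.mono-project.com/repo/debian/dists/wheezy/Release
$ # compute the real checksum, then compare it with the SHA256 entry in Release
$ sha256sum Packages
$ grep 'main/binary-amd64/Packages$' Release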
I am following the mono installation tutorial. First I add the repo to the list, then I import the key, and finally I try to update the repos. On the last step I get an error:
W: Failed to fetch http://download.mono-project.com/repo/debian/dists/wheezy/main/binary-amd64/Packages Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones used instead.
I tried to fix this by running rm -rf /var/lib/apt/lists/* but the error remains. Can you help me understand why this error happens and, if possible, give a resolution/workaround? I would prefer not to compile from source nor to use the version in the official repository. I installed Debian in a VM using debian-7.6.0-amd64-DVD-1.iso
Why do I get a Hash Sum mismatch error when trying to install mono on Debian
It looks like MonoDevelop 2.95 is a development release, so you won't find it in the repository. MonoDevelop 3.0 has just been released (May 14, 2012); I think this is the version you need. You can download it from the MonoDevelop web page. No prebuilt package is available for Ubuntu/Mint right now, so your best option is to build it from source (using the previous version of mono, presumably). If you are new to building packages from source, it is actually a straightforward process (once you have all the build tools/libraries installed):
download
untar
./configure
make
sudo make install
Alternatively you can wait for a prebuilt package.
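Spelled out, those steps look roughly like this (a sketch; the tarball name and URL are hypothetical placeholders, take the real ones from the MonoDevelop download page):
$ wget http://monodevelop.example.org/monodevelop-3.0.tar.bz2   # hypothetical URL
$ tar xjf monodevelop-3.0.tar.bz2
$ cd monodevelop-3.0
$ ./configure
$ make
$ sudo make install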
I installed monodevelop from the repositories because I want to try the Mono-D plugin, but when I go to the Gallery in Add-in Manager, it says "No add-ins found". I tried adding http://mono-d.alexanderbothe.com/ as a repository, but it requires MonoDevelop version 2.95 and the one in the repository is version 2.6:
The selected add-ins can't be installed because there are dependency conflicts
The package 'Ide v2.9.5' could not be found in any repository
How to install MonoDevelop with the D language add-in on Mint 12
I just switched to KeePassXC 2.2.0 (oldstable) and it works fine in the above regard, and I was able to open existing database files from KeePass2. This doesn't really answer the question as to what caused the above, but it's a useful solution, at least.
I haven't updated KeePass 2.x (I'm still on 2.32), so what could have broken this program on Linux Mint 18.1 x64? It used to be that normal Ctrl+[letter] shortcuts like Ctrl+C to copy, Ctrl+U to open URL in browser, etc. worked fine on this system. Now none of them do. I have to right-click and use context menu options. This persists after a restart. Unfortunately I'm not sure which update might have broken this. It seems the Ctrl part is just being ignored entirely: in any text box within KeePass where I try to paste with Ctrl+V, it instead types the letter v into the box. Aside from KeePass, everywhere else in the system my Ctrl key works just fine for copy/paste/undo/select all/etc. The only exception is that I also recently experienced this in pgAdminIII, but restarting the program (or perhaps my whole computer--not sure which it was) fixed it, because the problem did not persist after that. KeePass remains broken between cold boots. What components are at play here? How can I diagnose such a problem? KeePass 2.x seems to be based on Mono, but I don't have mono-complete installed--I installed from the repo mentioned at https://sourceforge.net/p/keepass/discussion/329220/thread/17d1bd26/#4a47/2783 (IIRC--it was months ago now, and it's been working since then until somewhat recently. But I did seem to have that repo already there when I tried to add it again just now, so unless it didn't work and I installed some other way without cleaning up the repo, that's the version I have. From the rest of that thread I gather that the version I have has whatever it needs from Mono already compiled in?) For lack of other ideas, after all the above, I upgraded to KeePass 2.36. It still has this behaviour. Also it doesn't matter if I use left or right Ctrl, or even both at the same time.
Why do [Ctrl]+[letter] shortcuts suddenly not work anymore in KeePass 2.x under Linux Mint, while most other programs are fine? [closed]
Use the GNU debugger, gdb, or something similar.
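Since the process dies only after 90 seconds, attaching to it while it is still running is the practical approach (a sketch; the pgrep pattern is a guess at how the process will show up in your process list):
$ # attach gdb to the running mono process and wait for it to exit
$ sudo gdb /usr/bin/mono -p "$(pgrep -f mono-service)"
$ # or trace the whole process tree to a log file, as strace was used in the question
$ strace -f -o /tmp/service.trace /usr/lib/mono/4.5/mono-service ./Debug/ComputationalImageClientServer.exe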
I would like to find an Ubuntu Linux 16.04 system command, similar to strace, to find out why my C++ program, ServiceController.exe, which calls
execle("/usr/lib/mono/4.5/mono-service", "/usr/lib/mono/4.5/mono-service", "./Debug/ComputationalImageClientServer.exe", 0, char const* EnvironmentPtr)
mysteriously stops running after 90 seconds, where ComputationalImageClientServer.exe and ComputatationalImageClientServer.exe are C#/.NET 4.5 executables. In contrast, when I run /usr/lib/mono/4.5/mono-service.exe ./Debug/ComputatationalImageVideoServer.exe at the command prompt, it runs continually for 24 hours by 7 days at least. Why can't the first example run continuously 24x7? How might I diagnose, debug and fix this error?
open("Delaware_Client_Server.exe", O_RDONLY) = 3
pipe2([4, 5], O_CLOEXEC) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f743e4dca10) = 3509
close(5) = 0
fcntl(4, F_SETFD, 0) = 0
fstat(4, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
read(4, "", 4096) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=3509, si_uid=1000, si_status=1, si_utime=0, si_stime=0} ---
close(4) = 0
wait4(3509, [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 3509
write(1, "\n", 1) = 1
write(1, "Process returned 256\n", 21) = 21
Why does Ubuntu 16.04 execle of a specific C# image halt after 90 seconds while others run 24X7?
Although the process used to find it would make you think it is, it isn't actually a repository, it is in fact just a simple file-share. In this situation, your best bet would be to remove all your openSUSE-provided mono packages (zypper rm), lock them (zypper al), and install (zypper in) the packages from the mono-project file-share. Alternatively, mono in OpenSUSE Factory is now at the next major version (3.0.2), so you could also get your packages from there. Although, I'd recommend you don't - Factory isn't suitable for daily use, and there is a possibility packages from Factory may pull in other incompatible packages and cause serious issues.
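In zypper terms the first suggestion might look like this (a sketch; the exact package names to remove depend on what you have installed, and the rpm files are assumed downloaded into the current directory):
$ # remove the distro packages, then lock the names so zypper won't pull them back in
$ sudo zypper rm mono-core
$ sudo zypper al 'mono*'
$ # install the mono-project rpms in one transaction
$ sudo zypper in *.rpm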
Question: I want to install mono3 from here: http://download.mono-project.com/archive/3.0.3/linux/x64/ How can I add this link to the appropriate openSUSE package installation program? I tried
zypper ar http://download.mono-project.com/archive/3.0.3/linux/x64/ mono3
but that yields:
"Repository type cannot be determined" Repository mono3 is invalid.
Adding an openSUSE mono repository?
Currently, Debian testing is in a freeze state. This means that new uploads must be approved by the release team, and generally must fix RC (release critical) bugs. It is very rare for the release team to accept new upstream releases (rather than patches specifically for RC bugs) after the freeze. So the answer to this question is: after the following has occurred:
The Mono team packages and uploads Mono 3.0 to unstable
Wheezy is released as stable and Jessie becomes the new testing
2-10 days have passed since the upload to unstable (depending on the urgency set on the package).
In addition to this, if an RC bug is filed against the unstable package before it migrates to testing, the RC bug will block migration. The severity of the bug will need to be downgraded, or a new version of the package which fixes the RC bug will need to be uploaded. Outside of a time in which testing is frozen, the answer to your question is "2-10 days after the maintainer or team has time to do the work and upload to unstable". Maintainers or teams own packages in Debian, and they are all volunteers, so it is really dependent on the individuals involved. Unfortunately, I do not know of any direct sources where this process is clearly laid out. I have this knowledge from years of working with the OS and lurking around the development community.
Mono 3.0 was released yesterday. I am really excited by this release and am curious to know when it will be available in Debian testing (Wheezy). Is there a standard timeline set by the Debian project for when the newest release of a piece of software is made available in Testing, or in general any of the branches except Stable?
How soon do new releases get packaged into Debian Testing?
That package doesn't exist in the current Debian stable (Squeeze), but it's in testing (Wheezy). You could compile it yourself or try to find a backport, but the easiest might be to just install Wheezy, which is getting close to becoming the new stable anyway.
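If you do want to stay on Squeeze and pull just this package from testing, the usual pattern is apt pinning (a sketch of the standard setup, not something specific to mono-apache-server4):
# /etc/apt/preferences - keep stable as the default source
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 100
Then, with a testing line in sources.list, install only the one package from testing:
$ sudo apt-get -t testing install mono-apache-server4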
I installed mod-mono on Debian via apt-get install mono-apache-server2... But when I do apt-get install mono-apache-server4 it gives me:
E: Unable to locate package mono-apache-server4
Anyone know how I can get mono-apache-server4 on there?
How to get mono-apache-server4 on Debian
The warning is generated because you have two SCRIPT_FILENAME lines in your /etc/nginx/fastcgi_params file: the original value and the new value you added. You should comment out the old value to suppress the warning message. The error is generated because the syntax of your fastcgi-mono-server4 command invocation is wrong. The /applications element should probably be something like:
/applications=localhost:/:/var/www/API/
See this document for further details.
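With that fix applied, the invocation from the question would become (using the paths and port given there):
$ # the webapp spec is virtual host : virtual path : physical path
$ fastcgi-mono-server4 /applications=localhost:/:/var/www/API/ /socket=tcp:127.0.0.1:9000 /verbose=True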
I have a Web API that is using .NET 4.0 which I'm now trying to deploy on Debian. I've followed a few tutorials on how to do this, e.g. Running ASP.net Web API Services under Linux and OSx.
/etc/nginx/sites-available/default:
server {
    listen 80;
    root /var/www/API/;
    index index.html index.htm default.aspx Default.aspx index.cshtml Index.cshtml;
    server_name localhost;
    location / {
        fastcgi_index Index.cshtml;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}
I added the following to /etc/nginx/fastcgi_params:
fastcgi_param PATH_INFO "";
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
Then I started nginx and the mono server:
# /etc/init.d/nginx start
# fastcgi-mono-server4 /applications=/localhost:/var/www/API/ /socket=tcp:127.0.0.1:9000 /verbose=True
Then when I try to access the site the log gives warnings and errors that I've failed to find a solution for:
Warning: Duplicate name, SCRIPT_FILENAME, encountered. Overwriting existing value.
Error: No application defined for: localhost:80/Index.cshtml
Encountering issues when trying to host an ASP.NET Web Api on Debian using Mono and Nginx
I need runtime v2.0.50727:
$ ikdasm KEEProg_03c.exe | head -n 2
// Metadata version: v2.0.50727
This answer: https://stackoverflow.com/questions/33508922/mono-on-macosx-the-runtime-version-supported-by-this-application-is-unavailab/33517383#33517383 suggests:
$ mono --runtime=2.0 ./KEEProg_03c.exe
which works.
I use KEEProg_24xxx_03c to control an EEPROM programmer through USB. This has worked great for years. However, now it complains:
$ mono ./KEEProg_03c.exe
WARNING: The runtime version supported by this application is unavailable. Using default runtime: v4.0.30319
Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. File name: 'System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'
[ERROR] FATAL UNHANDLED EXCEPTION: System.IO.FileNotFoundException: Could not load file or assembly 'System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. File name: 'System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'
It is very likely that I have upgraded mono since I used it last time. It sounds as if mono no longer supports this old binary. I do not have the source code, so I cannot relink it against a newer version. wine starts the program, but USB access does not work ("About" complains: Device not found). What are my options? Is there a GNU/Linux tool that can control my 24xxx? Can I downgrade mono or install the old library version? KEEProg connects via USB. The USB device is detected as:
[1333363.114683] usb 3-2: new full-speed USB device number 33 using xhci_hcd
[1333363.248418] usb 3-2: New USB device found, idVendor=0403, idProduct=6001
[1333363.248423] usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[1333363.248426] usb 3-2: Product: FT232R USB UART
[1333363.248428] usb 3-2: Manufacturer: FTDI
[1333363.248430] usb 3-2: SerialNumber: A700f2Je
[1333363.250897] ftdi_sio 3-2:1.0: FTDI USB Serial Device converter detected
[1333363.250948] usb 3-2: Detected FT232RL
[1333363.251171] usb 3-2: FTDI USB Serial Device converter now attached to ttyUSB0
The device looks very much like this one (I cannot find any visual differences): https://sigma-shop.com/product/31/usb-24xxx-i2c-e-eprom-programmer-microchip-atmel.html The code can be found here: https://info.kmtronic.com/software/KEEPROG/KEEProg_24xx/KEEProg_24xxx_03c.zip
24xxx programmer tool fails: Mono library downgrade? Other options?
The machine store keys and certificates are stored in the root user's home directory:
/root/.config/.mono/
However, when I chmod'd the subdirectories there to provide access to the user in question, it didn't work; I still got permission denied. Instead, I'm taking dave_thompson_085's tip: I gave the user sudo access to mono's certmgr utility and now it's working well.
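Limiting the sudo grant to that single utility keeps the privilege narrow; a sudoers drop-in does it (a sketch; testuser and the certmgr path are placeholders, check `which certmgr` on your machine):
# /etc/sudoers.d/certmgr
testuser ALL=(root) NOPASSWD: /usr/bin/certmgr
The test user can then run sudo certmgr ... to install into the machine store, but nothing else as root.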
I am running NUnit tests on Ubuntu via mono. They require the certmgr tool to install certificates and keys that the tests utilize. The user cert/key I can install fine, but the tests fall over if the CA certificate is not installed in the MACHINE rather than LOCAL USER Trust store. However, the Machine store requires sudo to install certificates. The user running the tests, I don't want to give sudo privileges to. Can I reduce the privileges required to install certificates to the machine store?
Can I reduce the privileges level on mono's certificate machine store?
Those emulators are not perfect. Try the application you want to use, but it may take some tweaking to get it working correctly. See WineHQ's support page and Mono's Documentation for help to get them working.
Do I need to worry about what version of the OS the software runs on and what the software requirements are? Can I install any Windows software on Mono or Wine without worrying about compatibility?
Do I need to worry about software compatibility at all if I use Mono and Wine?
You most definitely have Wine and/or Mono installed via a package manager, and along with it a special configuration file which tells the kernel what to do once the user launches an exe file. More on it here: https://en.wikipedia.org/wiki/Binfmt_misc
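You can see these registrations for yourself; binfmt_misc exposes them under /proc (the entry names vary by distribution, so treat the listing as an example):
$ # every registered binary-format handler gets a file here
$ ls /proc/sys/fs/binfmt_misc/
$ # each entry shows the magic bytes it matches and the interpreter the kernel runs
$ cat /proc/sys/fs/binfmt_misc/*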
I just discovered something really surprising. I can compile a simple C# hello world on my Windows computer using csc (from Visual Studio), copy the resulting exe file to my Linux computer, and execute it with mono helloworld.exe. So far, everything makes sense to me: according to this SO post, on Windows, helloworld.exe is basically just a trick that ends up starting the C# runtime, and the CIL bytecode is just read from some data section later on in the exe file. Likewise, I imagine that, on Linux, doing mono helloworld.exe just starts the C# runtime and directly reads the bytecode without bothering with the exe trickery. For posterity, here is the source, which I got from Charles Petzold's excellent free C# book:
//---------------------------------------------
// FirstProgram.cs (c) 2006 by Charles Petzold
//---------------------------------------------
class FirstProgram
{
    public static void Main()
    {
        System.Console.WriteLine("Hello, Microsoft .NET Framework!");
    }
}
But here's where things get interesting: on Linux (uname -r on my machine gives 4.14.188-1-MANJARO) I can simply do ./helloworld.exe and it works! I started doing some sleuthing, and here are the first few lines from running strace ./helloworld.exe:
execve("./hw.exe", ["./hw.exe"], 0x7fffae8e6070 /* 61 vars */) = 0
[ Process PID=3381 runs in 32 bit mode. ]
brk(NULL) = 0x7eedb000
arch_prctl(0x3001 /* ARCH_??? */, 0xffb19948) = -1 EINVAL (Invalid argument)
readlink("/proc/self/exe", "/usr/bin/wine", 4096) = 13
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xf7faa000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/bin/../lib32/tls/i686/sse2/libwine.so.1", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat64("/usr/bin/../lib32/tls/i686/sse2", 0xffb18e00) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/bin/../lib32/tls/i686/libwine.so.1", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat64("/usr/bin/../lib32/tls/i686", 0xffb18e00) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/bin/../lib32/tls/sse2/libwine.so.1", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat64("/usr/bin/../lib32/tls/sse2", 0xffb18e00) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/bin/../lib32/tls/libwine.so.1", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat64("/usr/bin/../lib32/tls", 0xffb18e00) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/bin/../lib32/i686/sse2/libwine.so.1", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat64("/usr/bin/../lib32/i686/sse2", 0xffb18e00) = -1 ENOENT (No such file or directory)
I would have expected some error saying "this file is not a valid executable", since I only expect the Linux program loader to understand ELF files, not Windows's PE format. Instead, it seems that somehow the system is smart enough to start looking for Wine (in the strace output, you can see it start to look for Wine libraries, and because I've installed Wine on my Linux machine, it does eventually find them later on). So what's going on? Is the execve call smart enough to try using Wine if it detects a PE file, or is this something that bash is doing? Or is it something else entirely different?
How did Linux load this exe compiled with C#? [duplicate]
apt already tells you what you should try next:You might want to run 'apt --fix-broken install' to correct these.If that fails for some reason, or doesn't fix your issue, please update the question with the output of that command.
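If the --fix-broken run keeps failing on the file conflict shown in the question (mono-roslyn trying to overwrite /usr/bin/csc, owned by chicken-bin), one common workaround - offered as a suggestion, not something this answer prescribes - is to let dpkg overwrite the file and then re-run the fix:
$ sudo dpkg -i --force-overwrite /var/cache/apt/archives/mono-roslyn_5.18.1.3-0xamarin3+ubuntu1804b1_all.deb
$ sudo apt --fix-broken install
Removing chicken-bin first (sudo apt remove chicken-bin) avoids the conflict entirely, if you don't need that package.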
How can I fix this issue:
(base) jaakko@jaakko-GL553VW:~/Desktop/Programming$ sudo apt install synaptic
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
mono-devel : Depends: mono-roslyn (= 5.18.1.3-0xamarin3+ubuntu1804b1) but it is not going to be installed
Depends: ca-certificates-mono (= 5.18.1.3-0xamarin3+ubuntu1804b1) but 5.18.0.240+dfsg-2ubuntu2 is to be installed
Recommends: referenceassemblies-pcl but it is not going to be installed
Recommends: msbuild but it is not going to be installed
synaptic : Depends: libept1.5.0 but it is not going to be installed
Depends: libxapian30 (>= 1.4.8~) but it is not going to be installed
Recommends: libgtk2-perl (>= 1:1.130) but it is not going to be installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
(base) jaakko@jaakko-GL553VW:~/Desktop/Programming$ sudo apt --fix-broken install
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following additional packages will be installed: ca-certificates-mono mono-roslyn
The following NEW packages will be installed: mono-roslyn
The following packages will be upgraded: ca-certificates-mono
1 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
149 not fully installed or removed.
Need to get 0 B/5 600 kB of archives.
After this operation, 21,6 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
(Reading database ... 312351 files and directories currently installed.)
Preparing to unpack .../mono-roslyn_5.18.1.3-0xamarin3+ubuntu1804b1_all.deb ...
Unpacking mono-roslyn (5.18.1.3-0xamarin3+ubuntu1804b1) ...
dpkg: error processing archive /var/cache/apt/archives/mono-roslyn_5.18.1.3-0xamarin3+ubuntu1804b1_all.deb (--unpack):
trying to overwrite '/usr/bin/csc', which is also in package chicken-bin 4.13.0-1
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Errors were encountered while processing: /var/cache/apt/archives/mono-roslyn_5.18.1.3-0xamarin3+ubuntu1804b1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
(base) jaakko@jaakko-GL553VW:~/Desktop/Programming$
I tried to install mono by:
jaakko@jaakko-GL553VW:~/Desktop/Programming$ sudo apt install apt-transport-https dirmngr
Reading package lists... Done
Building dependency tree
Reading state information... Done
dirmngr is already the newest version (2.2.12-1ubuntu3).
apt-transport-https is already the newest version (1.8.0).
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
mono-devel : Depends: mono-roslyn (= 5.18.1.3-0xamarin3+ubuntu1804b1) but it is not going to be installed
Depends: ca-certificates-mono (= 5.18.1.3-0xamarin3+ubuntu1804b1) but 5.18.0.240+dfsg-2ubuntu2 is to be installed
Recommends: referenceassemblies-pcl but it is not going to be installed
Recommends: msbuild but it is not going to be installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
(base) jaakko@jaakko-GL553VW:~/Desktop/Programming$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
Executing: /tmp/apt-key-gpghome.7lPVGXOSvM/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
gpg: key A6A19B38D3D831EF: 2 signatures not checked due to missing keys
gpg: key A6A19B38D3D831EF: "Xamarin Public Jenkins (auto-signing) <[emailprotected]>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1
(base) jaakko@jaakko-GL553VW:~/Desktop/Programming$ echo "deb https://download.mono-project.com/repo/ubuntu vs-bionic main" | sudo tee /etc/apt/sources.list.d/mono-official-vs.list
deb https://download.mono-project.com/repo/ubuntu vs-bionic main
(base) jaakko@jaakko-GL553VW:~/Desktop/Programming$ sudo apt update
Hit:1 http://fi.archive.ubuntu.com/ubuntu disco InRelease
Hit:2 http://fi.archive.ubuntu.com/ubuntu disco-updates InRelease
Hit:3 http://ppa.launchpad.net/teejee2008/ppa/ubuntu disco InRelease
Hit:4 http://fi.archive.ubuntu.com/ubuntu disco-backports InRelease
Hit:5 http://security.ubuntu.com/ubuntu disco-security InRelease
Hit:6 https://download.mono-project.com/repo/ubuntu vs-bionic InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
27 packages can be upgraded. Run 'apt list --upgradable' to see them.
(base) jaakko@jaakko-GL553VW:~/Desktop/Programming$ sudo apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
mono-devel : Depends: mono-roslyn (= 5.18.1.3-0xamarin3+ubuntu1804b1) but it is not installed
Depends: ca-certificates-mono (= 5.18.1.3-0xamarin3+ubuntu1804b1) but 5.18.0.240+dfsg-2ubuntu2 is installed
Recommends: referenceassemblies-pcl but it is not installed
Recommends: msbuild but it is not installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
(base) jaakko@jaakko-GL553VW:~/Desktop/Programming$
Installing mono messed something on my computer
It's under the directory that was the current working directory of the Mono process that executed the move.
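If you don't know what that working directory was, searching for the new name is quick (a sketch; adjust the name and the search root to your case):
$ # look for the renamed directory anywhere under your home directory
$ find ~ -type d -name subfolder 2>/dev/null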
I used MonoDevelop to write a small C#-program that moves directories and files from one location to another location, but I messed it up a bit. I used C#'s DirectoryInfo.MoveTo(path1, path2); to move the folder, but I forgot to specify the actual parent folder of path2. The initial situation was like this: I have a subfolder in the folder /home/waka/Downloads/folder/subfolder_Name_That_Is_Too_Long_For_My_Liking I tried moving/renaming that subfolder to simply /home/waka/Downloads/folder/subfolder, but didn't specify the /home/waka/Downloads/folder part and instead moved it like this: DirectoryInfo.MoveTo("/home/waka/Downloads/folder/subfolder_Name_That_Is_Too_Long_For_My_Liking", "subfolder"); So, my question is: Where did this folder end up? I can't use history | grep mv because I didn't use the mv command. Did I just delete the folder or is it still somewhere to be found? What I have tried: 1. Running fsck, but this warns me that on mounted devices I will damage the file system. 2. I tried simply reversing the blunder, but got a Directory not found exception.
Moved a directory to a non-existing location?
First you can confirm that you already have the mpm_prefork module by seeing that it's shipped in the apache2 package in 16.04. You'll see a couple of results for it if you do this:
dpkg -L apache2 | grep fork
/etc/apache2/mods-available/mpm_prefork.conf
/etc/apache2/mods-available/mpm_prefork.load
Now check which MPM module is enabled, and you'll see that the Event MPM module is enabled while the Prefork module is not:
ls /etc/apache2/mods-enabled/mpm*
It sounds like you want to disable the Event MPM module and enable the Prefork MPM module, which you can do with symlinks, and then restart Apache:
sudo rm /etc/apache2/mods-enabled/mpm*
sudo ln -s /etc/apache2/mods-available/*fork* /etc/apache2/mods-enabled/
Perhaps then your "StartServers" directive will work as desired. The Event MPM server runs an event loop in a single process, so it doesn't need all the extra processes. You also mentioned starting Apache and systemd. I recommend NOT starting Apache's httpd directly or with apache2ctl; ONLY control it through systemd, for consistency. Here are some related systemd control commands, as examples:
sudo systemctl start apache2
sudo systemctl stop apache2
sudo systemctl restart apache2
You had more questions in your comments about setting up Mono, ASPX and multiple apps. You should ask those questions separately, and be clear whether you intend to serve multiple apps on a single domain or multiple apps on multiple domains. To make the Apache2 service start at boot, run:
sudo systemctl enable apache2
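Apache also ships helpers that manage those symlinks for you, which is less error-prone than rm/ln by hand (same effect as the commands above):
$ sudo a2dismod mpm_event
$ sudo a2enmod mpm_prefork
$ sudo systemctl restart apache2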
I would like to know how to start up k=10 Apache2 server processes upon Ubuntu 16.04 alpha release reboot. Yesterday, I read this URL, [https://rudd-o.com/linux-and-free-software/tuning-an-apache-server-in-5-minutes], which says to use Apache2 with the prefork.c module and set StartServers equal to 10:
<IfModule prefork.c>
StartServers 4
MinSpareServers 3
MaxSpareServers 10
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 10000
</IfModule>
Unfortunately, my apache2 only has the event.c module. I tried upgrading apache2 to the prefork.c module with sudo apt-get install apache-mtm-prefork and the install error said no such package is available because it may be outdated or obsolete. Then I tried sudo apt-get update and I received the same error message. My next step was to try to configure the apache2 event module with StartServers = 5 by changing /etc/httpd.conf and then restarting my systemd apache.service file; ps -ef | grep -in "apache2" shows only 2 www-data apache2 processes and ps -ef | grep -in "mod" shows no mod-mono-server4 process. Furthermore, upon running apache2 at the bash shell command prompt, it said syntax error: APACHE2_LOCK_FILE environment variable missing. I discovered that APACHE2_LOCK_FILE is defined in my /etc/apache2/envvars file. The Ubuntu 16.04 apache2 man page says, "In general, apache2 should not be invoked directly, but rather should be invoked via /etc/init.d/apache2 or apache2ctl. The default Debian configuration requires environment variables that are defined in /etc/apache2/envvars and are not available if apache2 is started directly. However, apache2ctl can be used to pass arbitrary arguments to apache2." So, to use /etc/apache2/envvars, I edited my systemd apache.service file to use ExecStart = /etc/init.d/apache2 start and rebooted my Lenovo ThinkStation Ubuntu 16.04 desktop, and to no avail, I still got only 2 www-data apache2 processes, and ps -ef | grep -in "mod" shows no mod-mono-server4 process when I ran ps -ef | grep -in "apache2". May I ask what I did wrong and how to fix it? Please suggest tests I can do. I know that many Ubuntu 16.04 alpha release users will soon complain about the same problem I experienced.
How to start up k=10 Apache2 server processes upon Ubuntu 16.04 alpha release reboot?
The C program performs the equivalent of this script:
#!/bin/bash
export LD_LIBRARY_PATH=.
exec /usr/lib/mono/4.5/mono-service.exe Audio_Video_Recorder.exe --debug '>&' LOGCamster.txt
echo "Oops!" >&2
exit 255
Notice that the >& and LOGCamster.txt are passed as literal arguments to your command line. Specifically, >& is not interpreted by the shell to mean "attach stderr to stdout" because there is no shell handling your command line. The chances are that the program doesn't like the '>&' parameter it's being given and exits immediately. Setting LD_LIBRARY_PATH to . opens up a potentially huge security hole; I really wouldn't do that if I were you. If you really need to do this from an executable then you can do one of two things:
Redirect stdout and stderr yourself. Here you'll need to close(1) and then open() your log file. You can then close(2) and dup(1). After this just execve your program without trying to redirect the output anywhere, because it's already being redirected to the log file.
Call a shell to interpret your command. Here you need three arguments: char *argv[] = { "/bin/sh", "-c", "mono-service.exe Audio_Recorder.exe --debug >& log.txt" }. However, if you're doing this you really might as well use a script, which is easier to write and more maintainable.
When we start in the mono-service mode on Ubuntu Linux 16.04 in the background, we would like to create and store a log file wherever the user wants. I tried this URL, stackoverflow.com/questions/11024474/capture-mono-service-stdout-console-output where it says do this:
mono-service2 myservice.exe -l:/var/run/test --debug > log.txt
which does not work when I test the following C++ program:
#include <unistd.h>    // execv(), fork()
#include <sys/types.h> // pid_t
#include <sys/wait.h>  // waitpid()
#include <stdio.h>

int main(int argc, char* argvp)
{
    char *argv[] = { "/usr/lib/mono/4.5/mono-service.exe", "SmartCamXi_NVR_Recorder.exe",
                     "--debug", "'>&'", "/home/venkat/LOGCamster.txt", 0 };
    char *envp[] = { "LD_LIBRARY_PATH=/home/venkat/Debug", 0 };
    execve(argv[0], &argv[0], envp);
    fprintf(stderr, "Oops!\n");
    return -1;
}
because I observe no log file being created. How could I fix this error?
Problems creating log file of mono-service assembly output
The problem here is that your Mono applications are, just like any other source code, outside of the package management system's control. Therefore you need to work out yourself which packages you need to install - nothing will tell you. You have a few options, and the simplest of those is to read the developer's documentation. Failing that, you can (as you have started) run strace and see what's missing. The problem then is working out which package that file belongs to. In Ubuntu, you can use apt-file as follows. Install and configure it:
$ sudo apt-get install apt-file
$ apt-file update
Then find the package with:
$ apt-file search gtk-sharp-2.0.pc
and it'll tell you that you need to install libgtk2.0-cil-dev for this file. Next run the Mono application again and, if need be, strace and apt-file will help you find the missing files.
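The strace half of that loop can be narrowed so only failed file lookups show up (a sketch; Simple.exe is the executable from the question):
$ # trace only file-open attempts and keep the ones that failed
$ strace -f -e trace=open,openat mono ./Simple.exe 2>&1 | grep ENOENT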
What are the other Mono dependencies, besides mono-runtime and libmono-system-windows-forms4.0-cil, required to run C# Windows Forms applications on Ubuntu Linux 16.04? Does the answer to our question depend on the C# source code of our Windows Forms applications? For example, hellboy81 wrote on the Mono gitter, Feb 10 01:46:
I have an issue with WinForms's TreeView control in Mono under Ubuntu 64 Bit 15.10 (Unity): items are not selectable and selection is not visible
Does there exist a package update or new package to fix this problem? I believe the other Mono dependency, GTK# 2.0, is found by using:
strace mono ./Simple.exe
If I add the line:
using GTK;
to Simple.cs and compile:
mcs -r:System.Windows.Forms.dll -r:System.Drawing.dll -pkg:gtk-sharp-2.0 Simple.cs -out:Simple.exe
the compiler output is:
Package gtk-sharp-2.0 was not found in the pkg-config search path. Perhaps you should add the directory containing `gtk-sharp-2.0.pc' to the PKG_CONFIG_PATH environment variable No package 'gtk-sharp-2.0' found.
May I ask if gtk-sharp-2.0 is a dependency required for System.Windows.Forms, and whether there are any other "Mono" dependencies? Our architect does not want to consider libgtk2.0-cil-dev as an example of a missing Mono dependency. Instead, he wants to find out what dependencies we need other than mono-runtime and libmono-system-windows-forms4.0-cil for ListBox, ComboBox, EditBox, buttons, etc. Do ListBox, ComboBox, EditBox, buttons, etc. require additional Mono dependencies?
What are all the Mono dependencies required to run all types of C# Windows Forms applications?
That is two very separate questions. First, Wine and viruses. The truth is that when running Wine you are allowing Windows programs to run inside Linux. In theory this allows viruses too. There are some real limits - for example, Wine doesn't have direct access to hardware, and its C:\ drive is just a folder in your home directory - but yes, you can get Windows viruses this way (at least in theory). Second, .NET vs. Mono. This is a bit more complicated, but Mono is a CLR bytecode interpreter that aims to run CLR-compliant bytecode. In essence it is an implementation of .NET that works on many platforms (including Windows). There are several programs that use Mono (Banshee for one) that have nothing to do with Windows or Wine. Installing Mono for Wine gives you some extra features: it replaces (in theory) the Microsoft Common Language Runtime. It's closely equivalent to running either Sun Java or Open Java - they both support Java but they are not the same. So uninstalling .NET inside of Wine is up to you. Using Mono instead of the Microsoft CLR executor is up to you as well. It's all a matter of what works better. Take a look at WineHQ for your application and follow the directions there to get started. Uninstalling Mono, however, could hurt your system (or not) depending on whether it uses applications built for Mono; Java is a good analogy here again: you can remove it, unless some of your apps need Java. Again, the choice is yours. I recommend focusing on getting your application working in Wine, if that was the target, and worrying about Mono vs. Microsoft vs. Wine vs. system vs. whatever when you have a better idea what Wine is, what it is doing, and what you use it for. If you're truly concerned about viruses then take a look at clamav. Keep in mind Linux virus land is not the same as Windows, and there are different concerns.
When I first tried to use Wine it asked me to install .NET packages, and something else for HTML, so I did. Later I deleted Wine, and a few days later reinstalled it. When I tried to use Wine it didn't ask me any more to install .NET packages, so it means one is already in my system. When googling, I found that it is recommended to use the package from the Software Manager called 'mono', and that it is unsafe to use packages from Windows like .NET due to virus threats. My question is: how can I delete those .NET packages that Wine has installed automatically? And should I use Mono for Wine's needs?
Is it safe to have .NET packages on Mint 17.1? How can I delete them?
Your directory is still there :) You have renamed it to .... Because files whose names start with . are hidden, you cannot see the directory unless you display hidden files. Run
ls -A
and there it is! Revert the change:
mv .... original_folder_name
and do the move correctly:
mv original_folder_name ../..
I used the command
mv folder_name ....
I thought that by using .. twice, it would move the folder back two directories. Unfortunately my files disappeared. I need to recover them.
Wrong mv command. Where did my files go? [duplicate]
If you know that none of the file names contain newlines, tabs, spaces or glob characters that may produce a match, this may be easier for a one-off case:
mv $(grep -L Attachments *) dest_dir
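If you can't rule those names out and your tools are GNU, a NUL-separated pipeline sidesteps the word-splitting problem (a sketch; -Z, -r0 and -t are GNU extensions to grep, xargs and mv):
$ # print non-matching file names NUL-terminated, then move them in batches
$ grep -LZ -- Attachments * | xargs -r0 mv -t dest_dir --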
I have this grep command to find files without the word Attachments in them:
grep -L -- Attachments *
I want to move all the files that are output from that command. How do I do that in bash? Do I use a pipe? Do I use a more wordy if/then statement in a full-on script?
How do I move all files output from a command?
Interestingly enough, it seems the answer may be "it depends". To be clear, mv is specified to:
The mv utility shall perform actions equivalent to the rename() function
The rename() function specification states:
This rename() function is equivalent for regular files to that defined by the ISO C standard. Its inclusion here expands that definition to include actions on directories and specifies behavior when the new parameter names a file that already exists. That specification requires that the action of the function be atomic.
But the latest ISO C specification for rename() states:
7.21.4.2 The rename function
Synopsis
#include <stdio.h>
int rename(const char *old, const char *new);
Description
The rename function causes the file whose name is the string pointed to by old to be henceforth known by the name given by the string pointed to by new. The file named old is no longer accessible by that name. If a file named by the string pointed to by new exists prior to the call to the rename function, the behavior is implementation-defined.
Returns
The rename function returns zero if the operation succeeds, nonzero if it fails, in which case if the file existed previously it is still known by its original name.
Surprisingly, note that there is no explicit requirement for atomicity. It may be required somewhere else in the latest publicly-available C standard, but I haven't been able to find it. If anyone can find such a requirement, edits and comments are more than welcome. See also Is rename() atomic?
Per the Linux man page:
If newpath already exists, it will be atomically replaced, so that there is no point at which another process attempting to access newpath will find it missing. However, there will probably be a window in which both oldpath and newpath refer to the file being renamed.
The Linux man page claims the replacement of the file will be atomic. Testing and verifying that atomicity might be very difficult, though, if that is how far you need to go. You're not clear in your use of "How can I check if mv is atomic": do you want requirements/specification/documentation that it's atomic, or do you need to actually test it? Note also that the above assumes the two operand file names are on the same file system. I can find no standard restriction on the mv utility to enforce that.
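If "actually test it" means confirming that your mv issues a single rename() call on your system, strace will show you directly (a sketch; newer coreutils may use a different member of the rename family, hence the list):
$ touch foo
$ strace -e trace=rename,renameat,renameat2 mv foo bar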
How can I check if mv is atomic on my fs (ext4)? The OS is Red Hat Enterprise Linux Server release 6.8. In general, how can I check this? I have looked around, and didn't find out whether my OS is standard POSIX.
Is mv atomic on my fs?
rsync copies to temporary filenames (e.g. see Rsync temporary file extension and rsync - does it create a temp file during transfer?) unless you use the --inplace option. It renames them only after the file has been transferred successfully. rsync also deletes any destination files that were only partially transferred (e.g. due to disk full or other error). There is also a --remove-source-files option which deletes the source file(s) after they've been successfully transferred. See the rsync man page for more details. Putting that all together, you could use something like: rsync -ax --remove-source-files source/ target/This option is particularly useful for tasks like moving files out of an "incoming" queue or similar to the directory where they will be processed. Alternatively, if this is a once-off mirror, maybe just use rsync without the --remove-source-files option. You can delete the source files later if you want/need to.
I have to move some files from one filesystem to another under Ubuntu. However, it is very important that the files never exist as partial or incomplete files at the destination, at least not under the correct file name. So far, my only solution is to write a script that takes each file, copies it to a temporary name at the destination, then renames it (which I believe should be atomic) at the destination to the original filename and finally deletes the originating file. However, writing and debugging a script seems like overkill for this task. Is there a way or tool that already does this natively?
Approximating atomic move across file systems?
First, let's dispel some myths.
"it is atomic so inconsistencies cannot happen"
Moving a file inside the same filesystem (i.e. the rename() system call) is atomic with respect to the software environment. Atomicity means that any process that looks for the file will either see it at its old location or at its new location; no process will be able to observe that the file has a different link count, or that the file is present in the source directory after being present in the destination directory, or that the file is absent from the target directory after being absent in the source directory. However, if the system crashes due to a bug, a disk error or a power loss, there is no guarantee that the filesystem is left in a consistent state, let alone that the move isn't left half-done. Linux does not in general offer a guarantee of atomicity with respect to hardware events.
"first you copy the dir entry in the new dir and then erase entry on previous dir, so you may have the inconsistency of having a file referenced twice, but the ref count is 1"
This refers to a specific implementation technique. There are others. It so happens that ext2 on Linux (as of kernel 3.16) uses this particular technique. However, this does not imply that the disk content goes through the sequence [old location] → [both locations] → [new location], because the two operations (add new entry, remove old entry) are not atomic at the hardware level either: it is possible for one of them to be interrupted, leaving the filesystem in an inconsistent state. (Hopefully fsck will repair it.) Furthermore, the block layer can reorder writes, so the first half could be committed to disk just before the crash and the second half would then not have been performed. The reference count will never be observed to be different from 1 as long as the system doesn't crash (see above), but that guarantee does not extend to a system crash.
"it first erases the pointer and then copy the pointer so the inconsistency is that the file has reference 0"
Once again, this refers to a particular implementation technique. A dangling file cannot be observed if the system doesn't crash, but it is a possible consequence of a system crash, at least in some configurations.
According to a blog post by Alexander Larsson, ext2 gives no guarantee of consistency on a system crash, but ext3 does in the data=ordered mode. (Note that this blog post is not about rename itself, but about the combination of writing to a file and calling rename on that file.) Theodore Ts'o, the principal author of the ext2, ext3 and ext4 filesystems, wrote a blog post on the same issue. This blog post discusses atomicity (with respect to the software environment only) and durability (which is atomicity with respect to crashes plus a guarantee of commitment, i.e. knowing that the operation has been performed). Unfortunately I can't find information about atomicity with respect to crashes alone. However, the durability guarantees given for ext4 require that rename be atomic. The kernel documentation for ext4 states that ext4 with the auto_da_alloc option (which is the default in modern kernels) provides a durability guarantee for a write followed by a rename, which implies that rename is atomic with respect to hardware crashes.
For Btrfs, a rename that overwrites an existing file is guaranteed to be atomic with respect to crashes, but a rename that does not overwrite a file can result in neither file, or both files, existing. In summary, the answer to your question is that not only is moving a file not atomic with respect to crashes on ext2, but it isn't even guaranteed to leave the file in a consistent state (though failures that fsck cannot repair are rare) — pretty much nothing is, which is why better filesystems have been invented. Ext3, ext4 and btrfs do provide limited guarantees.
I have two folders on the same partition (ext2). If I mv folder1/file folder2 and some interruption occurs (e.g. power failure), could the file system ever end up being inconsistent? Isn't the mv operation atomic? Update: So far on IRC I got the following perspectives:
it is atomic, so inconsistencies cannot happen
first you copy the dir entry in the new dir and then erase the entry in the previous dir, so you may have the inconsistency of having a file referenced twice, but the ref count is 1
it first erases the pointer and then copies the pointer, so the inconsistency is that the file has reference 0
Can someone clarify?
Can the filesystem become inconsistent if interrupted when moving a file?
With zsh:
setopt extendedglob # best in ~/.zshrc
mv A/^(file|directory)(|2)(D) B/
(the (D) is to include dot (hidden) files).
With bash:
shopt -s extglob dotglob failglob
mv A/!(@(file|directory)?(2)) B/
With ksh93:
(FIGNORE='@(.|..|@(file|directory)?(2))'; mv A/* B)
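Before running the real mv, it can be worth previewing what a negated glob matches; substituting echo for mv is a safe dry run (bash version shown, same pattern as above):
$ shopt -s extglob dotglob failglob
$ # prints the names that would be moved, without moving anything
$ echo A/!(@(file|directory)?(2))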
I have a folder A which contains files and directories. I want to move all of those files and directories to another folder B, except file, file2, directory, and directory2. How can this be done?
Linux command line. Move all files and directories in directory, except some files and directories
I have found the reason for the error (found it on a different forum). The error was due to the hashing algorithm used by ext4's directory index, which is enabled by the "dir_index" feature. There were too many hash collisions in my case, so I disabled it with the following command:
tune2fs -O "^dir_index" /dev/sdb3
The downside is that my partition is slower than before due to the lack of indexing. For more information on the problem, see: ext4: Mysterious "No space left on device"-errors
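After toggling a feature flag like this, it is worth letting e2fsck check the filesystem; its -D option also rebuilds/optimizes the (now un-indexed) directories (a sketch; only run it with the partition unmounted):
$ sudo umount /dev/sdb3
$ # -f forces a full check even if the filesystem looks clean, -D optimizes directories
$ sudo e2fsck -fD /dev/sdb3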
I'm trying to move around 4.5 million files (sizes range from 100 to 1000 bytes) from one partition to another. The total size of the folder is ~2.4 GB. First I tried to zip it and move the zipped file to the new location; it was able to extract only ~800k files before showing an "out of space" error. Next I tried the mv command and it also resulted in the same condition. Using rsync also resulted in the same error, with only ~800k files being moved. I checked the disk free status and it is way under the limit (the new partition has ~700 GB free space and the required space is ~2.4 GB). I checked the free inodes for that partition as well: it is using only ~800k out of the maximum possible 191 M inodes. (I had actually formatted the partition with 'mkfs.ext4 -T small /dev/sdb3'.) I have no idea what is going wrong here. Every time, it is only able to copy or move ~800k files.
Moving millions of small files results in "out of space" error
As you added, the mv command is a script in /bin/mv with this last line: /bin/busybox mv $@This line is missing quotation marks around $@: /bin/busybox mv "$@"$@ denotes the list of parameters given to the script. Quoting this variable has the special meaning that, when expanded, every parameter will be quoted separately. This is valid for at least bash, dash and also busybox. This way, the mv command should also work when an argument contains quoted whitespace.
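The difference is easy to demonstrate in any POSIX-ish shell (a minimal sketch):
$ # helper that prints each argument it receives on its own line
$ show() { for a in "$@"; do printf '<%s>\n' "$a"; done; }
$ unquoted() { show $@; }
$ quoted() { show "$@"; }
$ unquoted 'te st'    # prints <te> and <st>: the argument was split
$ quoted 'te st'      # prints <te st>: the argument survived intact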
I'm trying to rename some files on my Synology Diskstation via SSH. The available shell is the BusyBox built-in shell:
BusyBox v1.16.1 (2013-04-16 20:13:10 CST) built-in shell (ash)
The move command always yields two errors when I try using a space character either in the source or destination filename. Escaping space characters or quoting the filename seems to have no effect. Example for renaming a file with a space character in the destination:
/volumeUSB1/usbshare/directory $ touch test
/volumeUSB1/usbshare/directory $ ls
test
/volumeUSB1/usbshare/directory $ mv test 'te st'
mv: can't rename 'test': No such file or directory
mv: can't rename 'te': No such file or directory
/volumeUSB1/usbshare/directory $ mv test te\ st
mv: can't rename 'test': No such file or directory
mv: can't rename 'te': No such file or directory
Renaming a file with a space character in the source yields similar results:
/volumeUSB1/usbshare/directory $ touch 'te st'
/volumeUSB1/usbshare/directory $ ls
te st
/volumeUSB1/usbshare/directory $ mv 'te st' test
mv: can't rename 'te': No such file or directory
mv: can't rename 'st': No such file or directory
/volumeUSB1/usbshare/directory $ mv te\ st test
mv: can't rename 'te': No such file or directory
mv: can't rename 'st': No such file or directory
type mv returns mv is /bin/mv. The file command is not available on my machine. cat /bin/mv revealed that it's a small script that ends with calling /bin/busybox mv $@. Where is my mistake?
Rename files with spaces in a BusyBox shell
Which system are you running? On Linux, that behaviour is configurable, through /proc/sys/fs/protected_hardlinks (or sysctl fs.protected_hardlinks). The behaviour is described in proc(5):/proc/sys/fs/protected_hardlinks (since Linux 3.6) When the value in this file is 0, no restrictions are placed on the creation of hard links (i.e., this is the historical behavior before Linux 3.6). When the value in this file is 1, a hard link can be created to a target file only if one of the following conditions is true:The calling process has the CAP_FOWNER capability ... The filesystem UID of the process creating the link matches the owner (UID) of the target file ... All of the following conditions are true: the target is a regular file; the target file does not have its set-user-ID mode bit enabled; the target file does not have both its set-group-ID and group-executable mode bits enabled; and the caller has permission to read and write the target file (either via the file's permissions mask or because it has suitable capabilities).And the rationale for that should be clear:The default value in this file is 0. Setting the value to 1 prevents a longstanding class of security issues caused by hard-link-based time-of-check, time-of-use races, most commonly seen in world-writable directories such as /tmp.On Debian systems protected_hardlinks and the similar protected_symlinks default to one, so making a link without write access to the file doesn't work: $ ls -ld . ./foo drwxrwxr-x 2 root itvirta 4096 Jul 11 16:43 ./ -rw-r--r-- 1 root root 4 Jul 11 16:43 ./foo $ mv foo bar $ ln bar bar2 ln: failed to create hard link 'bar2' => 'bar': Operation not permittedSetting protected_hardlinks to zero lifts the restriction: # echo 0 > /proc/sys/fs/protected_hardlinks $ ln bar bar2 $ ls -l bar bar2 -rw-r--r-- 2 root root 4 Jul 11 16:43 bar -rw-r--r-- 2 root root 4 Jul 11 16:43 bar2
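If you decide to relax the restriction permanently, something along these lines should work (the file name under /etc/sysctl.d is just an example): sysctl -w fs.protected_hardlinks=0 for the running system, and echo 'fs.protected_hardlinks = 0' > /etc/sysctl.d/99-hardlinks.conf to persist across reboots — though keeping the default of 1 is safer, for the reasons quoted above.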
Example script: #!/bin/sh -e sudo useradd -m user_a sudo useradd -m user_b -g user_a sudo chmod g+w /home/user_aset +e sudo su user_a <<EOF cd umask 027 >> file_a >> file_b >> file_c ls -l file_* EOFsudo su user_b <<EOF cd umask 000 rm -f file_* ls -l ~user_a/ set -x mv ~user_a/file_a . cp ~user_a/file_b . ln ~user_a/file_c . set +x ls -l ~/ EOF sudo userdel -r user_b sudo userdel -r user_aOutput: -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_b -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_c total 0 -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_b -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_c + mv /home/user_a/file_a . + cp /home/user_a/file_b . + ln /home/user_a/file_c . ln: failed to create hard link ‘./file_c’ => ‘/home/user_a/file_c’: Operation not permitted + set +x total 0 -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a -rw-r----- 1 user_b user_a 0 Jul 11 12:26 file_b userdel: user_b mail spool (/var/mail/user_b) not found userdel: user_a mail spool (/var/mail/user_a) not found
Why can I not hardlink to a file I don't own even though I can move it?
With zsh, you could do: for dir in **/*(NDodoN/e['[[ $REPLY:t = $REPLY:h:t ]]']); do contents=($dir/*(NDoN)) (( $#contents == 0 )) || mv -- $contents $dir:h/ && rmdir -- $dir done Where: **/*(qualifiers) recursive globbing with glob qualifiers N: nullglob: don't complain if there's no match D: dotglob: include hidden files od: order depth first (leaves before the branches they're on). oN: don't bother with ordering the list of files otherwise. /: restrict to files of type directory. e['expression']: restrict to files for which the expression code returns true (inside which the current file path is stored in $REPLY). $REPLY:t: tail (basename) of the file $REPLY:h:t: tail of the head (dirname) of the file With bash 4.4+ and GNU find, or the find of most BSDs, you could do something similar with: shopt -s nullglob dotglob readarray -td '' dirs < <( LC_ALL=C find . -depth -regex '.*\(/[^/]*\)\1' -type d -print0 ) for d in "${dirs[@]}"; do contents=("$d"/*) (( ${#contents[@]} == 0 )) || mv -- "${contents[@]}" "${d%/*}/" && rmdir -- "$d" done This time using a regular expression to match the ./path/to/dir/dir directories using basic regular expression back-references.
I need a way to search directories for child directories with the same name and then move all files in the child directory to the parent; thus from /recup-dir1/recup-dir1/files to /recup-dir1/files. The child directories can be left empty because I can use something like find . -type d -empty -delete to delete all empty dirs. So the problem is I have no idea in which directories the child directories with the same name exist and in which they don't. In pseudo code I need something like this: While more directories are unchecked get name-x of next dir enter dir If name-x/name-x exists move all files in name-x/name-x to name-x mark dir as done next My best guess is to create a little python script to make a list of all directories which have a child with the same name and loop this list through a command like find something something -exec mv. Maybe this could be done with bash scripting or another solution exists, like some rsync command; however, since I probably created this mess with rsync, I don't think that will be the solution. Edit: here is an actual part of the tree output. The top-level dirs are inside /mnt/external-disk/tst-backup. There are no sub-dirs on lower levels. │ └── recup_dir.1 ├── recup_dir.10 │ └── recup_dir.10 ├── recup_dir.100 │ └── recup_dir.100 ├── recup_dir.102 │ └── recup_dir.102 └── recup_dir.1020 └── recup_dir.1020
Parent and child directory with the same name: move files to the parent directory
Assuming that "Main Directory"/Test exists: mv "Main Directory"/Sub[1-3] "Main Directory"/TestThe only thing happening here is that you move the directories into the Test directory. The files in Sub1, Sub2 and Sub3 will still be available in those same directories, but now under the new path "Main Directory"/Test/Sub1 etc.With updated information in the comments below, assuming bash is used as the shell: mkdir -p "Main Directory"/Test mv "Main Directory"/episode_{0000..0049} "Main Directory"/TestThe brace expansion "Main Directory"/episode_{0000..0049} would expand to Main Directory/episode_0000 Main Directory/episode_0001 ... Main Directory/episode_0049.
I have a directory which contains multiple directories including subdirectories too. I want to move some of them to a single one at the same time (with one command) Example Main Directory Sub1 Subsub1 Subsub2 Sub2 Subsub1 Subsub2 Sub3 Subsub1 Subsub2 Sub4 Subsub1 Subsub2 Sub5 Subsub1 Subsub2 Test -----------------------I want to move Sub1, Sub2, Sub3 including their subdirectories, into Test folder, so finally I will have something like this Main Directory Sub4 Subsub1 Subsub2 Sub5 Subsub1 Subsub2 Test Sub1 Subsub1 Subsub2 Sub2 Subsub1 Subsub2 Sub3 Subsub1 Subsub2 -----------------------
move multiple directories in one directory - recursively
mv works in two ways. mc moves behave the same way. If the files are on the same logical device (partition or disk), only the directory entries are moved. This can be extremely fast. If the files are on different logical devices, the files are copied and the old file deleted after the copy is done. This is relatively slow as the file must be read, and then written.If your NAS has multiple disks, then moves may result in data moving between devices. You can check which directories are mounted (and their space utilization) with the df command.
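A quick way to check whether two paths live on the same filesystem (the paths here are placeholders for your actual source and destination): df /share/source /share/destination — if the "Mounted on" column differs between the two lines, a move between them will be a copy-then-delete and will take time proportional to the amount of data.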
I have a QNAP TS-210 NAS and it seems that there's a Debian Linux on board. I've installed Midnight Commander there and have been using it successfully for years. Today I spotted something weird. I have to move a large collection of movies (around 130 GB) from one folder to another. The movies are split into many subfolders. Whenever I enter any particular folder, select all files in MC and press F6 to move them to the destination, everything is fine. But when I attempt to move an entire subfolder (the destination has it as well, but the files inside are different, so there is no overwrite in any case), the process takes very, very long. It actually looks like my NAS is doing a copy of the files instead of a move. Is this normal? When I do the same operation on Windows, the move is always very fast, no matter whether I'm moving an entire folder or only its contents.
Midnight Commander takes enormously long to move files
The actual command described in the OP's link is xrandr --reflect [x|y|xy]. Its intended uses are for e.g. when you are displaying on a projector that's mounted upside down to the ceiling, and/or projecting to a through-projection screen so the image needs to be flipped to be right-way-round for the viewers. In both cases, it would be expected that both the display and the mouse movements would be flipped the same way: the xrandr --reflect is meant to affect the totality of the GPU's output for a particular display device, to compensate for either a non-standard positioning of the display device itself, or an optical device that mirrors the output of the display device. To "un-reverse" the mouse motions while keeping the screen mirrored, you would have to use the coordinate transformation matrix commands as suggested by A.B in their earlier deleted answer. Example: xinput indicates my wireless mouse is identified as as Logitech USB Receiver, and xrandr indicates my display output is DP-0. xinput --set-prop --type=float "Logitech USB Receiver" 'Coordinate Transformation Matrix' -1 0 1 0 1 0 0 0 1Now my display is normal, but when I move the mouse left, the pointer moves right, and vice versa. In other words, the mouse motion is mirrored in the X axis. xrandr --output DP-0 --reflect xAfter adding this command, my entire display is mirrored in the X direction: the mouse arrow pointer points to top-right instead of top-left. But because of the earlier command to invert the mouse motion in the X direction, the mouse "feels" normal. xinput --set-prop --type=float "Logitech USB Receiver" 'Coordinate Transformation Matrix' 1 0 0 0 1 0 0 0 1This returns the mouse motion transformation matrix to standard, so while the xrandr --reflect x is still in effect, now both the mouse directions and the desktop view are flipped in X direction. To move the mouse to display coordinates (0,0), I would move the mouse up and left as usual, while the mouse pointer moves to the top right corner of the screen. xrandr --output DP-0 --reflect normalThis undoes the output reflection, returning everything back to normal.
Sorry if it sounds confusing, but this is not a biology inquiry. I'm planning to switch to Linux, but I have a few questions that need answering. If a single monitor is mirrored as described in this link, will the pointer onscreen move similarly to, or contrary to, our physical mouse-hand movement? In case it is reversed, is there any way to make the mouse move as in normal setups while keeping the screen mirrored?
How would the mouse behave in a mirrored environment?
You can use the -exec action in find so that you can do shell string manipulation on the filename. The parameter expansion "${var%.*}" can be used to remove the extension. Below is an example. find . -type f -name '*.css.gz' -exec bash -c 's3cmd put --acl-public --add-header="Content-Encoding: gzip" "$1" "s3://mybucket/assets/${1%.*}"' -- {} \;
Currently I use three steps to gzip some static assets and then use s3cmd to upload them to an S3 bucket (technically it's a Digital Ocean Spaces bucket). Here's what I do: (1) $ find . -type f -name '*.css' | xargs -I{} gzip -k -9 {} (2) $ find . -type f -name '*.css.gz' | xargs -I{} s3cmd put --acl-public --add-header='Content-Encoding: gzip' {} s3://mybucket/assets/{} (3) But then I have to manually change all of the extensions in my bucket to remove the .gz extension. Is there a way that I won't have to do step 3 manually? I'd love to know if it's possible in step 2 to remove the .gz extension in the destination. I do want to keep the original files on my server though, so that's a deal breaker.
Find files then move them and rename them at the same time?
I think this cannot be done without a separate removal. rsync allows you to a) remove source files (but files only) via --remove-source-files and b) delete files in the target that do not exist in the source (--delete). Best used with -nv to make a verbose dry run. Depending on the version of rsync, --recursive may be needed. Nevertheless, you will have the empty directory tree left in the source, and directories not mentioned in the source will remain untouched in the target. Long story short: use two commands; the simplest option would be: rm -r "$BACKUP_DIR/scripts_old" && mv "$BACKUP_DIR/scripts" "$BACKUP_DIR/scripts_old"
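As a sketch of the rsync-based variant discussed above (paths assumed from the question): rsync -avn --remove-source-files "$BACKUP_DIR/scripts/" "$BACKUP_DIR/scripts_old/" for a verbose dry run, drop the n to actually transfer, and then find "$BACKUP_DIR/scripts" -depth -type d -empty -delete to clean up the empty source tree that rsync leaves behind.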
When trying to move some folder I am getting error: "cannot move File exists" export BACKUP_DIR=/backup mv -f $BACKUP_DIR/scripts $BACKUP_DIR/scripts_oldGetting error: mv: cannot move '/backup/scripts' to '/backup/scripts_old/scripts': File existsI've tried with -f option but without it as well - same error. How I can move this? Thanks!
mv: cannot move "File exists"
The > here redirects stdout to a file, like it would in a more normal use case: printf "%s\n" "hello world" > filenameThe spaces around > are optional, and it doesn't have to go at the end. This does the same: printf "%s\n">filename "hello world" So your mv line would more conventionally be written: mv /u01/app/oracle/product/12.1.0.2/db_1/dbs/ /u01/shared_data/oradata/TEST/test.dbf > test2.dbfwhich has renamed your folder to test.dbf (in a different directory) and written mv's stdout (probably nothing) to test2.dbf in your current directory. Hopefully that didn't accidentally overwrite an Oracle datafile. PS: If extra > are a frequent problem, bash's set -o noclobber/set -C option can at least help prevent overwriting files. The bash manpage describes it as: If set, bash does not overwrite an existing file with the >, >&, and <> redirection operators. This may be overridden when creating output files by using the redirection operator >| instead of >.
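A quick illustration of noclobber, assuming bash: after set -o noclobber, echo test > test2.dbf fails with "cannot overwrite existing file" if test2.dbf already exists, while echo test >| test2.dbf deliberately overrides the protection.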
I tried moving a file in Linux from one directory to another and accidentally had an extra > character. mv /u01/app/oracle/product/12.1.0.2/db_1/dbs/>test2.dbf /u01/shared_data/oradata/TEST/test.dbfNow, my entire dbs folder is missing. However when I locate the dbs folder /u01/app/oracle/product/12.1.0.2/db_1/dbs it appears to be there but I can't ll or cd to it: -bash: cd: /u01/app/oracle/product/12.1.0.2/db_1/dbs: No such file or directoryHow do I get the original back to where it was?
Had an extra > character when moving a file, now the folder is missing
The easy way: for f in /some/path/*; do if [ -f "$f" ]; then mv "$f" /some/other/path fi done The slightly more complicated way: find /some/path -mindepth 1 -maxdepth 1 -type f -exec mv {} /some/other/path \;
How do you move all the files (excluding sub-directories) from one directory to another? I'd prefer it if the solution used just basic shell scripting.
How to move all files (excluding sub-directories) from one directory to another?
You could iterate over the tar files and move each one to its parent directory '..', like this: for tar in $(find path -name '*.tar.gz'); do mv "$tar" "$(dirname "$tar")/.."; done (this simple loop assumes the file paths contain no whitespace or glob characters).
I have tar.gz files accessible from ./parent/subfolder/tar_file_folder/*tar.gz and I want to find them and move them into the ./parent/ directory, therefore one level up. However, there are several subfolders and tar_file_folders, so I want to call my command from the parent folder. I have tried this command line: cd ./parent find -name '*.tar.gz' -exec mv {} /path/to/single/target/directory \; However, I am not quite sure how to specify the path one level up. Any help? Two example paths, where the tar files have to be moved to either the ./AU-ASM folder or the ./AR-Vir/ folder: ./AU-ASM/[emailprotected]/LT51030752011211-SC20161014133947.tar.gz ./AR-Vir/[emailprotected]/LT51030751995263-SC20161014133510.tar.gz And this is an example of where the tar.gz files are stored: ./Landsat_Data/AR-Vir/[emailprotected]/*tar.gz The command line needs to be run from the ./Landsat_Data/ directory
Find all tar.gz files and move them one directory level up
See https://www.kernel.org/doc/Documentation/misc-devices/lis3lv02d. The DV7 should have the mentioned accelerometer. I can't come up with a specific implementation, but the Thinkpad community had some ideas for their model (with a different driver). Just as a starter: http://www.thinkwiki.org/wiki/Script_for_theft_alarm_using_HDAPS
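As a rough starting point, assuming the kernel driver from the link above is loaded on the DV7, the accelerometer reading should be exposed in sysfs and can be polled from bash: while sleep 1; do cat /sys/devices/platform/lis3lv02d/position; done — comparing successive (x,y,z) readings against a threshold would indicate the laptop being tilted or picked up.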
I'm trying to figure out if there's a way to use a bash or python script to tell if my laptop gets moved. I had the idea of using wifi signal strength to do it, but the problem with that is that you have to be pretty far from the router before the signal drops off. One idea I had was to somehow use a Raspberry Pi with a cheaper wifi module that would send a weaker signal, and query that to see how far from the Raspi the laptop is. But, I was wondering if this has been done before, and if so, if there's a way to tell without using wifi (as it seems quite inaccurate). I had the idea of using the camera, but I haven't been able to set up my integrated webcam with Ubuntu yet, and besides, I'd rather not have it always-on.
Is it possible to tell from a bash or python script if a laptop has been moved?
Already answered elsewhere. Here is a version adapted to this task: D=$(readlink -f "2"); (cd "1" && find . -type f -print0 | cpio --pass-through --null --link --make-directories "$D") && rm -Rf 1 After this command I have exactly what I wanted: $ find 1 2 -printf '%i %p\n' find: `1': No such file or directory 40011806 2 40011450 2/t 40011458 2/y 40011924 2/a 40011217 2/a/q 40014006 2/a/e 40013945 2/a/w Read the notes about usage in the original answer (linked above).
I have: $ find 1 2 -printf '%i %p\n' 40011805 1 40011450 1/t 40011923 1/a 40014006 1/a/e 40011217 1/a/q 40011806 2 40011458 2/y 40011924 2/a 40013989 2/a/e 40013945 2/a/w I want: <inode> <path> any 2 40011450 2/t 40011458 2/y any 2/a 40014006 2/a/e 40011217 2/a/q 40013945 2/a/w How do I do it?
How do I merge (without copying) two directories? [duplicate]
This is a possible solution: cat strings.txt | xargs -I '%' find . -type f -name "%*" -exec mv -t your_path {} + In the example above it moved GCA_000007405.1_ASM740v1_protein.faa, which is the only match.
I have a file strings.txt with a list of strings: GCA_001677475.1 GCA_003410275.1 GCA_002310615.1 GCA_000007405.1 GCA_000219515.3And I have many files in a directory with names like: GCA_000005845.2_ASM584v2_protein.faa GCA_000006925.2_ASM692v2_protein.faa GCA_000007405.1_ASM740v1_protein.faa GCA_000007445.1_ASM744v1_protein.faa GCA_000008865.2_ASM886v2_protein.faa GCA_000009565.2_ASM956v1_protein.faaI need to move only those files whose names start with a pattern from strings.txt. So far I've tried using xargs with mv: cat strings.txt | xargs -I % mv %*faa ./DataBut mv doesn't see %*faa as a regex and tries to find files with this exact name (and of course it can't find any). I've also tried using ls but it works the same way: cat strings.txt | xargs -I {} ls {}*faaSo how can I do that?
How to move all files with names starting with a string from a list?
* is a glob operator of the shell. It needs to be left unquoted to be recognised as such. When quoted, /media/sf_Mediaserver3/test22/abbamax.(6th.copy)..kansas.(1999)/* is passed literally to mv and mv tries to move that file called *, and there's no such file. So you need: mv -v -- "$jdir0"/* "$jdir0/subs/" >> "$debuglog" 2>&1For the shell to expand "$jdir0"/* into the list of matching files before calling mv. You do not want nullglob here as that would mean that in the absence of files matching that "$jdir0"/* pattern, mv would be invoked with just -v, -- and media/sf_Mediaserver3/test22/abbamax.(6th.copy)..kansas.(1999)/subs/ causing a confusing syntax error by mv. failglob to abort command when the globs don't match may be a better option in that case, though note that bash aborts in inconsistent ways in that case depending on the context the command is invoked in, which makes that option tricky to use in scripts. dotglob is to allow globs to match hidden files. Now, note that globs match files regardless of their type¹, so that * above will also match subs. If subs is a symlink to a directory, mv will happily move that subs symlink into that directory, causing all subsequent moves to fail as the subs target directory is now gone. If subs is a plain subdirectory, mv will likely complain that it can't move a directory into itself. So you may want to write it instead: shopt -s extglob mv -v -- "$jdir0"/!(subs) "$jdir0/subs/" >> "$debuglog" 2>&1Where !(pattern) is the ksh extended glob operator that matches on any filename that does not match pattern, so here moving any file but subs. Also note that in the bash shell, parameter expansions also need to be quoted when in targets of redirections even in non-interactive shell instances (except when bash is in POSIX mode).¹ unless you use zsh instead of bash and its glob qualifiers such as *(.) to move only regular files
I'm having trouble with moving files in a bash script. I've been trying different solutions that I've found here for the same problem, but can't find anything that works. My last attempt was adding shopt -s dotglob nullglob, but that didn't solve anything. In this test, jdir0="/media/sf_Mediaserver3/test22/abbamax.(6th.copy)..kansas.(1999)" mv -v "$jdir0/*" "$jdir0/subs/" &>> $debuglog .. and I get: mv: cannot stat '/media/sf_Mediaserver3/test22/abbamax.(6th.copy)..kansas.(1999)/*': No such file or directory but, yes, there are! drwxrwx--- 1 root vboxsf 4096 Aug 22 07:06 ../ -rwxrwx--- 1 root vboxsf 0 Aug 21 17:19 'kallee.(222)..nnn.srt'* -rwxrwx--- 1 root vboxsf 159363 Aug 21 17:26 'movie.test(2929).ismim.mp4'* drwxrwx--- 1 root vboxsf 0 Aug 22 07:06 subs/ (the reason the names are really strange is that, before this function, I am testing the removal of invalid chars) update: Apparently I got intermittent errors, and after days I've finally traced them back to a server issue (where the files were stored). Apparently these errors occurred if the server wasn't finished with the save/name change and the script asked it to do something new: for example renaming file A to B, and then asking it to rename B to C BEFORE the server had executed the first request, which resulted in the server saying B doesn't exist, which of course caused an error code.
Moving files with " and * caused error
Everything can be done much more simply using bash's built-in abilities: #!/bin/bash for file in *.sac do year=${file:0:4} day=${file:5:3} range_start=$(( (10#$day-1)/3*3+1 )) range_end=$(( range_start+2 )) ndir=$(printf "%04d.%03d.%03d" "$year" "$range_start" "$range_end") mkdir -p "archive/$ndir" mv "$file" "archive/$ndir/" done Note the 10# prefix in the arithmetic: without it, day numbers such as 008 or 009 would be rejected as invalid octal in the arithmetic expansion.
I have several files like these: 2020.001.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.001.03.04.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.002.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.003.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.004.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.004.05.06.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.005.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.006.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac ... 2020.366.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sacThe files are year.day.hour1.hour2... The important parameters are the year and day. The day goes from 001 to 366 (or 365). What I want is to organize my files like this (folder and files). So create the folder and move the respective files into it: 2020.001.003 - 2020.001.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.001.03.04.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.002.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.003.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac2020.004.006 - 2020.004.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.004.05.06.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.005.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sac 2020.006.00.01.pcc1_TBTZ_TBTZ_1.0-3.0.sacand so on until finishing the files in day 366 What I did (which does not work as I want): for file in *.sac do year=`echo "$file" | awk -F"." '{print $1}'` day=`echo "$file" | awk -F"." '{print $2}'` dayi=$day dayf="366" # # Moving every three day files delta=2 x1=$dayi x2=$(echo "$x1+$delta" | bc) if [ $x1 -lt $x2 ] then echo $x1 $x2 dir=$(echo "$year"."$x1"."$x2") mkdir $dir x1=$(( x1+3)) x2=$(( x1+delta)) fi doneAs a result the code is creating folders like: 2020.001.3 2020.002.4 2020.003.5 ...Basically it is creating folders that I do not need. Also I still do not know how to move the files into the folders. Thanks for your help.
How to move files into folders covering three days each, based on the filename, in bash
bash + awk solution: for f in $(awk 'NR > 1{ print $7 }' move.txt); do [[ -f "$f" ]] && mv "$f" ~/destination done Or with xargs: awk 'NR > 1{ print $7 }' move.txt | xargs -I {} mv {} ~/destination (put echo in front of mv first for a dry run that just prints the commands). The crucial awk operation implies: NR > 1 - start processing from the 2nd line (skip the 1st one as a header) print $7 - print the 7th field value $7 (the tar column)
I have tar.gz files like below in a directory df: A.tar.gz B.tar.gz C.tar.gz D.tar.gz E.tar.gz F.tar.gz G.tar.gz I also have a text file move.txt with the following columns: ID Status Status2 Status3 Status4 Status5 tar sample ID1 Negative Negative Negative Negative Negative D.tar.gz Sam1 ID2 Negative Negative Negative Negative Negative A.tar.gz Sam2 ID3 Negative Negative Negative Negative Negative C.tar.gz Sam3 ID4 Negative Negative Negative Negative Negative F.tar.gz Sam4 I want to move files in directory df to another directory based on their match in the move.txt file. I tried this way but it didn't work: for file in $(cat move.txt) do mv "$file" ~/destination done Output should be in the ~/destination directory: D.tar.gz A.tar.gz C.tar.gz F.tar.gz Looks like I'm missing how to pick the right column in the text file. Any help?
How to move the files to new directory based on names in text file?
As long as your source and destination paths are on the same filesystem, mv won't actually "move" anything. It'll just edit your directories' and files' metadata (inodes and links), but the data blocks themselves won't move. For instance, assuming that /home and /srv are on different filesystems, you'll observe the following: $ mv /home/bigfile.txt /home/mydir/ # Instant. $ mv /home/bigfile.txt /srv # Takes time.If you're moving all that data from a filesystem to another, then it has to be physically copied from a disk section to another: data blocks need to be moved, and that can take time (and to be honest, you can't do much about it). Doing it over SSH does not change a thing. SSH stands for Secure Shell, meaning you're getting an actual remote shell, not just using your machine as a relay for everything. Whatever you request from your remote machine through SSH is handled remotely.
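One way to verify this before a big move, assuming GNU stat (and using the example paths above): stat -c %d /home/bigfile.txt /srv prints each path's device number; equal numbers mean the same filesystem, so mv will be a near-instant rename.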
I have a remote Debian test server to which I connect via SSH (Putty client). I wanted to move a lot of files on a remote machine to another folder: remote: /mnt/a/ -> remote: /mnt/b/c/ RESULT@remote: /mnt/b/c/a/I've used a move (mv) command connecting to a remote computer with SSH from my local computer: mv /mnt/a/ /mnt/b/c/I did that with about 700 MB of data (about 5 files) and it took forever to copy those files. Does the mv command transfer the files to the local computer through SSH and then sends it back to another folder on the server? If so, is there any other command I can use to move the files only locally on the remote server?
Moving files from folder to folder on a remote server using SSH, without downloading them to the local computer
Assuming, for simplicity, that your original data is in directory a: a ├── d1 │ ├── f1 │ └── f2 └── d2 ├── f3 └── f4 and that you have a directory b which contains the same files as a, organized as a different directory structure: b ├── d1 │ └── f3 ├── d2 │ ├── f1 │ └── f2 └── d3 └── f4 To rearrange the files in b to match a's hierarchy without copying anything from a to b: export orig=a dest=b find "$orig" -type f -exec sh -c ' for file; do target=$dest${file#$orig} target=${target%/*} mkdir -p -- "$target" find "$dest" -type f -name "${file##*/}" \ -exec mv -i -- \{\} "$target/" \; done ' mysh {} + This not-really-efficient code (it spawns a new find process for every file in a): searches for every regular file in a, defines a target directory as the file's parent directory with a replaced by b, creates the target directory (mkdir -p doesn't complain about already existing directories and also creates all the needed parents), searches for every file in b named as the current one and moves them to the target directory; mv -i asks before overwriting, to avoid losing data if two files in distinct subdirectories of b happened to have the same name. You may then want to remove regular files or directories in b (such as d3 in our example) which are not in a: export orig=a dest=b find "$dest" \( -type f -o -type d \) -exec sh -c ' target=$orig${1#$dest} [ ! -e "$target" ] ' mysh {} \; -delete The final result is: b ├── d1 │ ├── f1 │ └── f2 └── d2 ├── f3 └── f4
I'm looking for some way to organize the files contained in directory b to make its structure equal to that of directory a (which contains the same files as b, just arranged in a different way), without copying or moving anything from directory a. That way seek some advanced use of mv command with output from awk or/and sed commands, as following images. Model directories before as Errors a and it without modifications, as Errors b: . . └── Errors a └── Errors b ├── Eltendorf ├── Eltendorf │ ├── 2013 March 09.txt │ ├── 2013 March 09.txt │ ├── 2014 November 07.txt │ ├── 2014 November 07.txt │ ├── 2016 August 03.txt │ ├── 2016 August 03.txt │ └── 2017 October 02.txt │ └── 2017 October 02.txt ├── Gettendorf ├── Gettendorf │ ├── 2011 August 05.txt │ ├── 2011 August 05.txt │ ├── 2014 October 02.txt │ ├── 2014 October 02.txt │ ├── 2014 October 09.txt │ ├── 2014 October 09.txt │ └── 2015 November 08.txt │ └── 2015 November 08.txt ├── Krensdorf ├── Krensdorf │ ├── 2010 August 04.txt │ ├── 2010 August 04.txt │ ├── 2010 November 04.txt │ ├── 2010 November 04.txt │ └── 2012 August 09.txt │ └── 2012 August 09.txt └── Ritzing └── Ritzing ├── 2013 March 01.txt ├── 2013 March 01.txt ├── 2013 March 02.txt ├── 2013 March 02.txt ├── 2013 March 03.txt ├── 2013 March 03.txt └── 2018 November 02.txt └── 2018 November 02.txtContents directories Errors c before, and after, as desired, as Errors d: . . └── Errors c └── Errors d ├── Eltendorf ├── Eltendorf │ ├── 2010 November 04.txt │ ├── 2013 March 09.txt │ ├── 2013 March 02.txt │ ├── 2014 November 07.txt │ ├── 2014 November 07.txt │ ├── 2016 August 03.txt │ └── 2014 October 09.txt │ └── 2017 October 02.txt ├── Gettendorf ├── Gettendorf │ ├── 2012 August 09.txt │ ├── 2011 August 05.txt │ ├── 2013 March 03.txt │ ├── 2014 October 02.txt │ ├── 2014 October 02.txt │ ├── 2014 October 09.txt │ └── 2017 October 02.txt │ └── 2015 November 08.txt ├── Krensdorf ├── Krensdorf │ ├── 2010 August 04.txt │ ├── 2010 August 04.txt │ ├── 2013 March 01.txt │ ├── 2010 November 04.txt │ ├── 2015 November 08.txt │ └── 2012 August 09.txt │ └── 2018 November 02.txt └── Ritzing └── Ritzing ├── 2013 March 01.txt ├── 2011 August 05.txt ├── 2013 March 02.txt ├── 2013 March 09.txt ├── 2013 March 03.txt └── 2016 August 03.txt └── 2018 November 02.txtThat way directory c should become equal to directory a without copying directory a contents.
Organize directory b and its subdirectories' files to match directory a, without copying or moving anything from directory a
The following should do: for file in ~/fileList/* do newpath=$(head -1 "$file" | tr '-' '/') mkdir -p "$newpath" echo "would now execute 'mv $file $newpath'" done This will store the transformed date in the variable $newpath and create that directory along with any missing parents if it doesn't yet exist. The subdirectory will be created in the directory where you run the script. In its current form it will print the command that would be executed. When you are satisfied, change to mv "$file" "$newpath" Update: Since you stated that the target directories will be located below the source directory where your files originally reside, checks are necessary to ensure that, should you run the script multiple times, it doesn't stumble upon these newly-generated entries. If you can identify the files by their extension (e.g. .txt), you can make the loop more specific: for file in ~/fileList/*.txt do newpath=~/fileList/$(head -1 "$file" | tr '-' '/') ... done Otherwise, one way is to only accept regular files as operands: for file in ~/fileList/* do if [[ ! -f $file ]]; then continue; fi newpath=~/fileList/$(head -1 "$file" | tr '-' '/') mkdir -p "$newpath" mv "$file" "$newpath" done
I am currently trying to write a script that organizes a list of files into subdirectories by their dates. The dates are contained on the first line of every file in the format YYYY-MM-DD. I have extracted the first line from every file using the head command and replaced the - with / so it becomes YYYY/MM/DD. This is so I can use it as a file path. I then want to create subdirectories YYYY>MM>DD where, for example, the file containing the date 2015/10/19 is stored in the file path ./2015 -> ./2015/10 -> ./2015/10/19. These subdirectories should be created in ~/fileList/. This is the code I have so far. for file in ~/fileList/* do head -1 "$file" | tr "-" "/" done This is a sample file 2001-02-03 Thursday Paris 44952 I'm struggling to find a way to create the subdirectories and store each file in the correct directory. Any code suggestions would be much appreciated. I am using the bash shell.
Organizing files by dates contained within them
With find in a single one-liner: find . -mindepth 2 -type f -execdir sh -c 'mv -vt ../ "$@" ; rmdir "$PWD"' _ {} + -mindepth 2 makes the find command ignore the current directory's own files. -execdir is important here: it makes find change the current directory to the directory where a file was found, and the commands inside run in that directory itself. mv -vt ../ "$@" will expand to mv -vt ../ "file1" "file 2" "..." "fileN". rmdir "$PWD" will delete the directory -execdir is running in; it runs after all the files have been moved up to the parent directory. Be careful that you don't overwrite files with the same filename when moving to the destination path.
I have a bunch of directories that contain MP3 files. These directories contain no other directories inside. How do I delete the whole directory structure without deleting the files? That would basically mean moving all files found inside these directories to the current directory, where the current directory is the directory containing the others.
copying files recursively without preserving directories
From your description it sounds like you likely dragged your files into another directory. To locate them you can wait a day: there's typically a cron job that runs on most distros and builds an index of the files on your system using slocate. Once this completes you can locate your files using the locate command: $ locate <string> where <string> is either the name of the directory or of a file within the directory that you accidentally dragged somewhere.
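If you don't want to wait for the nightly cron job, the index can usually be refreshed by hand (command names vary slightly by distro): sudo updatedb followed by locate nel should then turn up the misplaced /sdb/synchronisatie/nel directory wherever it was dropped.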
I was moving a directory on my parents' computer using a TeamViewer session. The session responds slowly, and somehow, when I was moving the directory (location: /sdb/synchronisatie/nel/), something went wrong, I guess, because now the directory is gone! And I cannot find it anywhere... I'm sure I used 'paste', but I think nothing happened and the directory hasn't been pasted. Now the directory is gone, as I said, and I cannot find it anymore. I tried testdisk but testdisk didn't list the directory. The directory contains all the files from my mother, so I am really hoping I can find it again. What can I do?
Lost directory: cannot find it with testdisk
I can't provide a solution or a workaround, but I guess I can explain it: As with every major version in recent times, in TB 115, the TB folks have managed to break all sorts of things that had been working in previous versions. There are still dozens of new bugs of various severity in TB 115, and I am quite sure that your problem is due to one of them. IMHO, your options are:Report the bug on bugzilla. But be prepared that nothing will happen about it for a long time; some bugs there are several years old without anybody ever caring. However, bug reports are recommended in every case.Live with the situation and hope that it will be resolved one day. This is not completely unlikely if a lot of other users have the same problem.A good part of TB is based on JavaScript. Eventually you can figure out what's going on and correct it yourself, or you could ask developers you know, e.g., in your company.Finally, another option, and probably the best one, would be to switch to Betterbird. BB is based on the current TB version, but has additional bug fixes and features added. In contrast to the TB team, its author is very helpful, fast and competent. I am personally using BB since several years and confirm that it is always based on the latest TB version. BB profiles are 100% compatible with TB profiles, so you can switch between the two at any time (given you are using the same version). [ Disclaimer: I am not involved in or related to the Betterbird project in any way, except that I am in close contact with its author from time to time to ask him to fix certain TB bugs in BB, and to help him with such bug fixes by thorough tests in production environments with large email stores. ] If you report the bug at bugzilla as described above and then drop a comment below with a link to the bug report, I eventually could ask the BB author whether he's willing to fix the problem.
I recently upgraded Thunderbird to version 115.0.1 and since then, the MOVE TO dropdown doesn't work when I am reading an e-mail. I am referring to this option:When I browse my e-mails (without opening the mail itself), and I use this dropdown, the e-mail that is selected will move to the chosen folder just fine. However, when I open a particular e-mail to read, then the functionality breaks down. The dropdown is still there, and I can still select a folder to move the e-mail to, but it doesn't do anything. I looked at the log, and this error is produced when selecting a folder to move the mail into:console.error: "An error occurred executing the cmd_moveMessage command: [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBView.doCommandWithFolder]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: chrome://messenger/content/mailCommon.js :: cmd_moveMessage :: line 167" data: no]"I am using POP3.
Can't move opened e-mails in Thunderbird
Assuming your files have no whitespace in their names, as shown in your question (they don't), you could use a simple shell loop to move the files into their corresponding directories from the second column. while IFS=' ' read -r fileName dirName; do mkdir -p "./$dirName" && mv "./$fileName" "./$dirName"; done <rules.txt If your rules.csv is really a .csv file (a comma-delimited file), then you can change to IFS=',' above (and remember that your file or directory names should then not contain a comma character).
I have one big folder that has lots of .txt files. I am trying to group these .txt files into separate subfolders based on specific rules from a rules.csv file (saying what subfolder they belong to). LARGE FOLDER: file1.txt file2.txt ... file100.txt The rules would be: file1.txt file3.txt file8.txt belong to "subfolder1"; file2.txt file4.txt file23.txt belong to "subfolder2"; etc. Here's the list of rules in the CSV (rules.csv): the first column is the filename and the second column is the subfolder I want to move it to. file1.txt subfolder1 file3.txt subfolder1 file8.txt subfolder1 file2.txt subfolder2 file4.txt subfolder2 file23.txt subfolder2 file5.txt subfolder3 file6.txt subfolder3 file9.txt subfolder3 file11.txt subfolder3 file16.txt subfolder3 file12.txt subfolder4 file13.txt subfolder4 file14.txt subfolder4 file19.txt subfolder4 file24.txt subfolder4 file28.txt subfolder4 file30.txt subfolder4 file78.txt subfolder5 file37.txt subfolder5 file49.txt subfolder5 file88.txt subfolder5 That's what I am trying to achieve. Would there be a way to move these .txt files into their respective subfolders with a terminal command like "mv" that reads these rules from the CSV file mentioned above? (Not sure if it's even possible.) I tried something like this: mv file1.txt,file3.txt,file8.txt* /subfolder1 but it seems counterproductive to do it manually for each without the rules :(
Move files in subfolders based on the specific rules from a file
Use a wildcard and remove the directory afterwards: mv /var/www/html/wordpress/* /var/www/html/ && rm -r /var/www/html/wordpress
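One caveat worth noting: the * glob does not match hidden files such as .htaccess. A sketch that also catches those, assuming bash: shopt -s dotglob then mv /var/www/html/wordpress/* /var/www/html/ && rmdir /var/www/html/wordpress — with dotglob set, rmdir suffices because the directory really is empty afterwards.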
How can I move the entire folder up one level in the directory structure? Using rsync or mv does not quite work. rsync /var/www/html/wordpress/ /var/www/html/ skipping directory . mv /var/www/html/wordpress/ /var/www/html/ mv: '/var/www/html/wordpress/' and '/var/www/html/wordpress' are the same file
Move files up one directory
I don’t know any way to do it in one step, but the easiest way around the problem is to remove the problem. The fact that the two directories have the same name is a problem; so, rename one of them: mv foo foo2 && mv foo2/foo foo && rmdir foo2
Suppose that I have only the following in ~/foo: . .. fooWith file managers if I cut the subfolder foo and paste it into ~ it automatically replaces the contents of ~/foo with that of ~/foo/foo. But is there a native command-line tool to do so, although I can achieve the goal with a function, too?
How do I replace a folder with its only subfolder of the same name in CLI?
You can avoid the slow bash loop with something like this, which seems to work OK in my tests: $ tr '\n' '\0' <file1 |xargs -0 -I{} mv -vt path/to/deny {} #v for verbose. #OR $ cat file1 |xargs -d'\n' -I{} mv -vit path/to/deny {} # set delimiter to new line For a dry run you can make a test like this: cat file1 |xargs -d'\n' -I{} echo "mv -vt path/to/deny " {} PS: My mv command on RHEL & Debian does not recognize the -R option. One pitfall of this solution is if the directory names in your file include newlines as part of their dirname. In all other cases (i.e. dir names with spaces) both versions were tested and work fine. If you want to do it with a loop, you could speed things up by avoiding calling mv for each line read from your file. You could "load" all the lines/directories into an array and call mv afterwards, like: $ while IFS= read -r dir; do folders+=("$dir");done < list.txt $ mv -t path/to/Deny_folder -- "${folders[@]}" #-R is not available in Red Hat and Debian Or even make a kind of mv grouping: while IFS= read -r dir; do let "a++" folders+=("$dir") [ "$a" -gt 1000 ] && mv -vt path/to/Deny_folder -- "${folders[@]}" && a=1 && unset folders done < list.txt
I have 3 massive folders containing lots of other folders that I need to give a third party access to for downloading via SFTP. At the moment every folder in the main directory is set with download rights for SFTP, so my idea is to make a list.txt containing the folders that the user can not access and set the permissions to something, or move these folders to another folder. The folder in question will have over 2000 folders containing millions of files over 500 GB, and I need to remove access to half of them. Example folder list: (1) some test (2) more test1. PLANT Madrid Two2013 Folio ltd2014-27201-07-983M3M 4M 5M3M Comp LTD5028 - Video6398SRTTGDS I was thinking something along the lines of a bash script that would either move the folders to a new folder or change permissions. Any thoughts on what would be best given the amount of data and folders, with the user using SFTP to download the other folders? while IFS= read -r dir; do mv -t path/to/Deny_folder -R -- "$dir" done < list.txt or while IFS= read -r dir; do chown 700 "$dir" done < list.txt
Change permission of folder based on list.txt
First of all, I can’t reproduce the results you claim for the command you showed. I got the files being renamed to another directory_file1.jpg, another directory_file2.jpg, etc., but still under the some directory directories. Secondly, because of the depth of your directory structure, you should be using -mindepth 4 instead of 2. Thirdly, I strongly encourage you to use -type f. As long as you’re using -name '*.jpg', you probably won’t find any directories. But six to eight weeks from now, you’ll look at this and think “I want this to apply to all my files — I don’t need to say -name '*.jpg',” and you’ll take it out. And then, if you don’t have -type f, the find command might start finding directories and renaming them. And modifying a directory tree while you’re scanning it is a recipe for disaster (like the proverbial “flying the airplane while we’re still building it”). Fourthly, the rename command to move a file up three levels and prefix its name with Directory<num>_ is rename -n 's!/[^/]+/[^/]+/([^/]+)$!_$1!' because /[^/]+ represents a directory level, so I just added two more copies of that. Warning: This will, as I said, move a file up three levels. If you have any files deeper in the directory tree than that, they will not be renamed to the top level; they may be renamed to something like Directory3/somedirectory_file3.jpg. But it turns out that that’s easy to fix. To move a file to the top level from whatever depth it is at, and prefix its name with Directory<num>_, just use s!(\./[^/]+)/.*/([^/]+)$!$1_$2! TL;DR So, the final command is find . -mindepth 4 -type f -exec rename -n 's!/[^/]+/[^/]+/([^/]+)$!_$1!' {} + Add -name '*.jpg' if you want. Delete -n when the trace output shows you that it’s doing the right thing. Beware that this has the potential to have filename collisions. If you have files Directory 1/some directory/another directory/sunrise.jpg and Directory 1/some directory/another directory/mountains/sunrise.jpg this command will try to rename them both to Directory1_sunrise.jpg. Luckily the rename program seems to be smart/friendly enough to do no harm in this case. The first such file that it sees will (probably) be renamed; the others will be left where they are (and you will be notified). (You can use -f or --force to force overwriting existing files.) But remember that some other programs (e.g., mv) have less friendly default actions. For the updated question ("How can I include all directory names in the file names?"): this is a little trickier. I'll explicitly assume that we are searching ., so all the pathnames will begin with ./. And I've discovered that the perlexpr argument to rename can actually contain multiple string-modification commands. So we can move all files to the top level and prefix their names with all the directory names in the path with s!^\./!!;s!/!_!g This also has the possibility of filename collisions. Directory 1/abc/def_ghi/sunrise.jpg and Directory 1/abc_def/ghi/sunrise.jpg will (potentially) both be renamed to Directory1_abc_def_ghi_sunrise.jpg. Disclaimer: I have tested this, but not at all thoroughly. I do not guarantee that this will work perfectly. You should make a complete backup before trying the above commands.
I have a directory with the following layout (the layout in Directory 1 is repeated in every other Directory<num>): Parent directory Directory 1 some directory another directory <many files> Directory 2 ︙ Directory 3 Directory 4 I'd like to rename the files by prefixing them with the Directory <num> and moving them up 3 directories so that they are under the Parent directory and have the original (now empty) directories deleted like: Parent directory Directory 1_<many files> Directory 2 ︙ Directory 3 Directory 4 How could I do that? The following from a similar question find . -mindepth 2 -name '*.jpg' -exec rename -n 's!/([^/]+)$!_$1!' {} +renames the files to the 1st parent directory: Parent directory Directory 1 another directory_<many files> Directory 2 ︙
Move files several directories up for several directories with similar layout
You can't have two files by the same name at the same time, so you'll need to first create the directory under a temporary name, then move the file into it, then rename the directory. Or alternatively rename the file to a temporary name, create the directory, and finally move the file. I see that Nautilus scripts can be written in any language. You can do this with the most pervasive scripting language, /bin/sh. #!/bin/sh set -e for file do case "$file" in */*) TMPDIR="${file%/*}"; file="${file##*/}";; *) TMPDIR=".";; esac temp="$(mktemp -d)" mv -- "$file" "$temp" mv -- "$temp" "$TMPDIR/$file" doneExplanations:set -e aborts the script on error. The for loop iterates over the arguments of the script. The case block sets TMPDIR to the directory containing the file. It works whether the argument contains a base name or a file path with a directory part. mktemp -d creates a directory with a random name in $TMPDIR. First I move the file to the temporary directory, then I rename the directory. This way, if the operation is interrupted in the middle, the file still has its desired name (whereas in the rename-file-to-temp approach there's a point in time when the file has the wrong name).If you want to remove the file's extension from the directory, change the last mv call to mv -- "$temp" "$TMPDIR/${file%.*}"${file%.*} takes the value of file and removes the suffix that matches .*. If the file has no extension, the name is left unchanged.
How can I create a Nautilus script that moves the selected file into a new folder with the same name? My starting point: /home/user/123, where 123 is a file with no extension. My goal is to achieve this result: /home/user/123/123, where we have the same file 123 inside a new folder also named 123. I can't figure this out, because every attempt I've made gave me the result: mkdir: cannot create directory `123': File exists
Nautilus-script to move file into same name directory
Assuming you don't have FOLDER.DUPLICATE.$DRIVEBENDER directories inside other FOLDER.DUPLICATE.$DRIVEBENDER directories, you could do something like: find . -path '*/FOLDER.DUPLICATE.$DRIVEBENDER/*' -prune -type f -print0 | perl -0lne ' if (m{(.*)/FOLDER.DUPLICATE.\$DRIVEBENDER/(.*)}s) { $upperfile = "$1/$2"; if (-s > -s $upperfile) { rename $_, $upperfile or warn "rename $_: $!\n"; } else { unlink $_ or warn "unlink $_: $!\n"; } }'(if your find doesn't support -print0, you can replace with -exec printf '%s\0' {} +).
Due to an old backup concept I have some hard drives here that contain file structures like: /1.1 /2.1 /3.1 /FOLDER.DUPLICATE.$DRIVEBENDER/1.1 /FOLDER.DUPLICATE.$DRIVEBENDER/3.1 /FOLDER.DUPLICATE.$DRIVEBENDER/4.1 /Subfolder/1.2 /Subfolder/FOLDER.DUPLICATE.$DRIVEBENDER/2.2 /Subfolder/FOLDER.DUPLICATE.$DRIVEBENDER/3.2 The result should be equal to the original structure, so all files in a folder called FOLDER.DUPLICATE.$DRIVEBENDER should be moved one level higher. If the file already exists one level higher, the bigger file should win.
If foldername equals "somestring", move all files in the folder one level up
There isn't a command to do this. However, it's a straightforward piece of scripting.Work out how to identify a file by its contents and move it: f=example_file.txt b=$(head -c1 <"$f") [ "$b" = "0" ] && echo mv -- "$f" zero/Work out how to iterate across all the 100,000 files in the directory: find . -maxdepth 1 -type f -printOr maybe your shell allows you to use a wildcard for a large number of entries and the simpler more obvious loop will work: for f in * do echo "./$f" doneWork out how to run the mv code for each possible file: find -maxdepth 1 -type f -exec sh -c ' b=$(head -c1 "$1") [ "$b" = "0" ] && echo mv -- "$1" zero/ ' _ {} \;Or for f in * do [ "$(head -c1 "$f")" = "0" ] && echo mv -- "$f" zero/ doneOptimise the find version: find -maxdepth 1 -type f -exec sh -c ' for f in "$@" do [ "$(head -c1 "$f")" = "0" ] && echo mv -- "$f" zero/ done ' _ {} +In all cases remove echo to change the command from telling you what it would do, to doing it.
Here's a command to move all files whose names begin with 0 into a folder called zero: mv [0]* zero Question: What is a command for moving all files whose contents begin with 0 into a folder called zero? Hopefully there is a similarly short command for that. I know that the first character of the contents of a file is given by head -c 1 filename.
How to move all files whose contents begin with 0?
This should do it: find /path/to/base/folder/ -type d -name 'sub*' -exec bash -c 'mv "$1"/* "$(dirname "$1")"' _ {} \; (the directory path is passed to bash as a positional parameter rather than substituted into the script, so names with spaces or special characters are handled safely). NOTE: this will not move hidden files (whose names start with .)
I have a file structure with several subfolders. I'd like to search for all subfolders whose names contain a certain string ("sub*") and then move all of the files in the found folders up one level from their respective locations, and potentially delete the then-empty folders, though I could do that in a second step as well.
Move Files from Directory up one level
Remove the tilde from /home/userb/public_html/. The tilde expands to the home directory of the user which in this case is root. As a result, you get: /root/home/userb/public_html/As per the error message, that directory doesn't exist. What you want instead is this: mv -v ~/public_html/* /home/userb/public_html/As far as changing the permissions and rights afterwards is concerned, that depends on what they are and what you want them to be. If you want userb to be able to read and edit the files and directories, for example, then you need to make userb the owner which can be done with: chown -R userb:userb /home/userb/public_html/
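To see the expansion for yourself (output shown assuming you are logged in as root): echo ~ prints /root, echo ~/home/userb prints /root/home/userb, and echo ~userb prints /home/userb — the bare ~userb form is another way to reach that user's home directory.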
I have downloaded files from another server to a root directory called public_html. Now I want to move all files and folders from public_html to /home/userb/public_html, so I am trying to move them with the command below: mv -v ~/public_html/* ~/home/userb/public_html/ but it's giving me this error: mv: target '/root/home/userb/public_html/' is not a directory Let me know how I can do it, and also whether I need to change permissions after the move. Thanks!
Move Files From Root Directory to another users Home Directory
GNU find: find dir \ -mindepth 2 -maxdepth 2 -type f \ -execdir sh -c ' mv -t ./*/ "$1" ' find-sh {} \; original directory structure dir ├── dirA/ │ ├── fileA │ └── subdir/ │ ├── e │ ├── q │ └── w └── dirB/ ├── fileB └── subdir/ ├── c ├── x └── z After the move operation dir ├── dirA/ │ └── subdir/ │ ├── e │ ├── fileA │ ├── q │ └── w └── dirB/ └── subdir/ ├── c ├── fileB ├── x └── z
I have a directory structure like this; dir ├── dirA │ └── file1 │ └── subdir └── dirB └── file2 └── subdirI need to move file1 to dirA/subdir and file2 to dirB/subdir. How can I do it in Linux?
Move multiple files to subdirectories in linux
jdir0="$@""$@" expands to all positional parameters / arguments to the script. When you assign it to a single variable, Bash concatenates them joined with spaces. You probably want just $1 here. mv "$jdir0/subs/*.srt"The variable expansion should indeed be quoted, since it prevents word splitting, and globbing if the variable contains something that looks like a glob. But here, you probably want the hard-coded glob to work, so it should not be quoted. So leave the asterisk outside quotes, with e.g. mv "$jdir0"/subs/*.srtor e.g. mv "$jdir0/subs/"*.srtBoth work the same, use which ever variant looks nicer to you.
Ohhhh, I've read a few pages and questions, but I just can't fully comprehend/understand it... jdir0="$@" # /home/tor/subbackup/teest2/jjj mv "$jdir0/subs/*.srt" /home/tor/subbackup/ mv: cannot stat '/home/tor/subbackup/teest2/jjj/*.srt': No such file or directory Well, duh, yes, there is one test2.srt in there.. I've seen a few pages with tons of different solutions, and as I have understood it, this should be translated to (the first move command): mv /home/tor/subbackup/teest2/jjj/subs/test2.srt /home/tor/subbackup/ This one works fine in the terminal (the first move command), but I can't get it to work in the script. What am I doing wrong?
Move files with wildcard?
Try this: for f in /volume1/video/*; do # skip over directories [ -f "$f" ] || continue # grep the date in YYMMDD format date=$(printf '%s' "$f" | grep -Eo '[0-9]{6}') # set target path using date to convert YYMMDD to YYYY-MM-DD(%a) target="/volume1/video/daily/$(date -d "$date" +%Y-%m-%d\(%a\))/" # mv the file echo mv "$f" "$target" done Remove the echo when it's working. Maybe it would be good to add a check that $date is not empty. You could add mkdir -p "$target" before the mv to avoid errors on missing directories. Note: this code has a millennium bug ;-)
I looked for similar questions, but it was hard to get a clear answer in my case. I use a Synology DS1515+ and DSM 6.2.2. First of all, I have been making daily folders containing a date, using the script below: mkdir /volume1/video/$(date +%Y-%m-%d\(%a\)) This script is executed every day at midnight, so these folders are made each night. (It means the destination folder always already exists before copying.) And some video files are downloaded. Those filenames contain a date, for example ABCDABCD.200328.avi or EFGHIJKH.200327.1080p.mp4. The filenames don't follow fixed rules, but a date of the form YYMMDD is included in every filename. I'd like to copy these files to the folders that include the same date (the folders made automatically, as I explained above). [Location of directory] Path of files I want to copy: /volume1/video/ Example of destination directory: /volume1/video/daily/2020-04-06(Mon) Could you help or explain how to do it? Please include the directory paths for my case as above, because otherwise I can't apply the recommended code, for lack of understanding. I apologize. Thank you very much again. Have a good day.
Copy a file containing date in the file name to a folder for that date
I think you can achieve your goal with the following command:

find Pictures -type f -print0 | xargs -0 mv -t Picturesnew

where Pictures is the source directory and Picturesnew is the destination directory. Both mv and cp have the format mv -t directory source..., which fits well in use with xargs.

But this method will leave files with duplicate names in their original positions (maybe that's not bad, since you can review them and copy them to the destination later), since mv -t won't work interactively when input redirection is used. That is because xargs already reads values from redirected standard input (which is the output of find), and mv, which is run by xargs, tries to read answers from the same standard input too. So you cannot use mv with pipes in interactive fashion.

Maybe the job can be solved by some clever input-handle reassigning, but let's try a simpler way: get rid of the output redirection. Also, we cannot use -print0 in command substitution because it doesn't allow 0-bytes. We need to iterate over the filenames with for, using newline as the input-field separator:

TMP=$IFS; IFS=$'\n'; for i in $(find Pictures -type f); do mv -it ../Picturesnew $i; done; IFS=$TMP

(you may copy it as a one-liner). In this case, mv will not only ask you about each duplicate, but wait for your answer as well.
I wanted to recursively move files from a folder (Pictures) to another (Picturesnew). The "Pictures" folder had many subfolders, and hence I used this command after following up on the posts here. Both Pictures and Picturesnew were in the same directory. I just wanted to get rid of all the subfolders and combine the data. I ran the following command from the directory these folders were situated in:

find ./Pictures -type f -name "*.jpg" -print0 | xargs -0 -Imysongs mv -i mysongs ./Picturesnew

Now, seemingly, the Picturesnew folder which should have appeared didn't appear at all, and hence I am confused as to where 20000 JPG files of mine went.
Recursively moving contents of directory
You must be moving between two different filesystems, so in effect the file is copied. So first "copy" it to a hidden name at the destination and, after that's done, rename it within the destination. This should do:

mv /usr/tmp/abc.txt /usr/data/.abc.txt && mv /usr/data/.abc.txt /usr/data/abc.txt

I assume your watching process won't recognise the hidden file. Otherwise you could make a temp directory at the target location or something similar.
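The same idea written out as a small sketch, using the paths from the question and assuming a leading-dot ".part" name is enough to hide the file from the watcher:

src=/usr/tmp/abc.txt
dst=/usr/data/abc.txt
tmp="$(dirname "$dst")/.$(basename "$dst").part"   # hidden transient name
mv "$src" "$tmp" && mv "$tmp" "$dst"               # second mv is an atomic rename: same filesystem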
I am trying to copy/move a large file (15 GB) to a directory in Linux and want to have a dependency on that event. Let's say I have a file named abc.txt, and I am running the command below:

mv /usr/tmp/abc.txt /usr/data/

When the move process starts, I see a file in the data directory with the actual file name, i.e. abc.txt, but with the data still being in transit. As the data directory lists the file abc.txt, my dependent process thinks that the file is available and starts; however, the file is not completely moved, and hence my dependent process triggers prematurely. Is there a way I can move a file with a transient name, i.e. while the data transfer is going on it will use a transient name (some swap file name) and change the name to the actual file name once it is completely transferred?
Copy or move large file with transient name till the file is completely transferred to destination in Linux
Using a shell loop and calling find once for each filename:

mkdir -p archive_dir
while IFS= read -r filename; do
    find maindir -type f -name "$filename" -exec mv {} archive_dir ';'
done <listfile.txt

This would be slightly inefficient since it would continue looking for matching filenames even after finding the file (and if it found another one, it would overwrite the one already moved). If using GNU find, you may add -quit to the very end of the find command to make the find process stop after the first file has been moved (shown below).

Showing it works:

$ cat listfile.txt
filename1.txt
filename2.txt

.
|-- listfile.txt
`-- maindir
    |-- subdir1
    |-- subdir2
    |   `-- filename1.txt
    `-- subdir3
        `-- filename2.txt

4 directories, 3 files

(running the above loop) Then:

.
|-- archive_dir
|   |-- filename1.txt
|   `-- filename2.txt
|-- listfile.txt
`-- maindir
    |-- subdir1
    |-- subdir2
    `-- subdir3

5 directories, 3 files

Related: Understanding the -exec option of `find`
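For reference, the GNU find variant with -quit mentioned above would look like this; the predicate order matters, since -quit must come after the -exec has run:

mkdir -p archive_dir
while IFS= read -r filename; do
    find maindir -type f -name "$filename" -exec mv {} archive_dir ';' -quit
done <listfile.txt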
I have a list file with names of files. I want to read one file name from the list at a time, look for it under a directory structure with multiple sub folders, and then, once found, move it into a different folder.

Ex: listfile.txt

Content of the file:

filename1.txt
filename2.txt

maindir
|--subdir1
|---subdir2/filename1.txt
|---subdir3/filename2.txt

Read file names from listfile.txt one by one and move them to a different folder, say /destfolder. Any suggestion would be great.

Thanks, Kavin
Find files from a list and move them in Unix
Using find and a shell script to convert the paths to paths relative to the start directory, replacing / with _:

find . -type f -iname 'file*.txt' -exec sh -c '
    targetdir=$1; shift
    for file; do
        cp "$file" "$targetdir/$(realpath --relative-base=. "$file" | tr / _)"
        # uncomment to restrict the filename to the last 4 directories
        #cp "$file" "$targetdir/$(realpath --relative-base=. "$file" |
        #    rev | cut -d/ -f-5 | rev | tr / _)"
    done
' sh /tmp/dest {} +

Replace /tmp/dest with your target directory and cd to the parent directory of your files before running the command.

Example input directory structure:

.
├── dir1
│   ├── file.txt
│   └── sub1
│       └── file.txt
├── dir2
│   └── file.txt
└── file.txt

Output in /tmp/dest:

dir1_file.txt
dir1_sub1_file.txt
dir2_file.txt
file.txt
I have thousands of text files called file.txt. Because they all share the same name, I cannot move them to the same folder without them overwriting each other. I need a command that will locate all of the file.txt files that are within thousands of directories and subdirectories, append the full path name to the end of the file name, and then copy each to a specified folder, leaving the original where it was.

Example:

from: file.txt
to: a/123/file.txt

(I know / can't be used in a file name, so hyphens or underscores will work, unless there is a visually more appropriate replacement.)

Also, the command would need to ignore case, as some are file.txt and some are File.txt, as well as having an optional 's' on the end (file.txt or files.txt). During the renaming, I am hoping the path can be appended as is, without changing case, as the directory/subdirectory names are randomized (example: a/adgDGeRddsdvvsdGSD/[fF]ile[s].txt). Thanks!
Command to gather all specified text files, rename them, and copy them to a folder
Use a wildcard:

mv 1660649147?????????.jpg frames2/

Depending on whether or not you mean to include the upper limit, maybe also:

mv 1660649148000000000.jpg frames2/

If there are too many files for the wildcard to match without running out of buffer space, use find instead:

find . -name 'frames2' -prune -o -name '1660649147?????????.jpg' -exec mv -t frames2/ {} +

Notes:
- The -name 'frames2' -prune clause prevents find descending into the frames2 subdirectory. You don't need it (or the -o "or") if frames2 is actually elsewhere.
- If your mv does not have the GNU extension -t, change the -exec clause to -exec mv {} frames2/ \;, but realise it will take considerably longer to complete.
- In all cases, you can amend mv to echo mv to see the set of chosen files without actually moving them.
So I have files that I need to remove/move/filter. All files follow this pattern, in a directory called frames: filename timestamp_in_nanosecond.jpg. This is a sample of those files, using ls piped to tail, ls | tail (I'm using tail because ls is too slow, maybe because there are too many files to list):

.../uwc/frames $ ls | tail
1660649146201561661.jpg
1660649146411875151.jpg
1660649146622526505.jpg
1660649146832063432.jpg
1660649147042957234.jpg
1660649147254488848.jpg
1660649147466753015.jpg
1660649147889093171.jpg
1660649148193314525.jpg
1660649148786199681.jpg

What if I want to move files to another directory, like frames2, with a specified range like this?

Range:
From  1660649147000000000.jpg
Until 1660649148000000000.jpg

Hence, the frames2 dir will then contain these files only:

1660649147042957234.jpg
1660649147254488848.jpg
1660649147466753015.jpg
1660649147889093171.jpg
Remove/move files in a directory with filename timestamp pattern
You don't have to do anything special. Simply mv /mnt/X/* to /mnt/X/backups/:

mv /mnt/X/* /mnt/X/backups/

(You will get an error about not being able to move backups to itself.)

A hard link is basically an inode number. Files that are hard-linked have the same inode number. However you move them around within the same file-system, the inode number does not change. So there is no special action needed. Try it for yourself with some simple files in /tmp first:

/tmp $ mkdir aa
/tmp $ touch aa/f
/tmp $ ln aa/f aa/g
/tmp $ mkdir aa/new
/tmp $ mv aa/* aa/new
mv: cannot move 'aa/new' to a subdirectory of itself, 'aa/new/new'
/tmp $ ls -il aa/new/
13185910 -rw-r--r-- 2 0 Apr 11 13:32 f
13185910 -rw-r--r-- 2 0 Apr 11 13:32 g
I have a daily rsync script backing up my data to an external hard drive at /mnt/X (the root of the hard drive). I am using --link-dest to use hard links and avoid duplicating data. I need to move all my daily backups from /mnt/X to /mnt/X/backups without losing the hard links. Later I will need to change the script to back up into the new destination directory, /mnt/X/backups, and look for the previous day's backup in the same directory. How would you suggest I do the move?
Move daily backup directories (made by rsync) to another directory in the same partition
awk 'NR > 1 { print $1, $2 }' list.csv |
while read -r prefix group; do
    find . -type f -name "$prefix*" -exec sh -c '
        group="$1"; shift
        mkdir -p "$group"
        for name do
            echo mv "$name" "$group"
        done' sh "$group" {} +
done

This would use awk to feed a while loop with the prefixes and group names (skipping the file header of the list file). This assumes that all prefixes and group names have no spaces or tabs in them.

The while loop calls find to find all regular files in or below the current directory that have names beginning with the given prefix. For all such files, the following short shell script is called:

group="$1"
shift

mkdir -p "$group"
for name do
    echo mv "$name" "$group"
done

This script expects the group name to be the first argument on the command line, and the rest of the arguments to be pathnames of files to move to that group directory. The script creates a group directory in the current working directory if it does not already exist, and then loops over the given pathnames, moving each file into place. No check is done for whether files are overwritten.

The echo protects the mv from actually running. Run the code with echo in place to make sure it works, then remove the echo.
I have thousands of files with various prefixes, and those prefixes are grouped in a csv list. Now I want to move all of them to folders according to that list. For instance: files A1B1C1_{...}.png, A1B2C2_{...}.png, A1B1C3_{...}.png, etc.

CSV list:

Name    Group
A1B1C1  John
A2B1C1  John
A1B2C2  Denver
A1B1C3  Nick

Now I want to move all files with the A1B1C1_ and A2B1C1_ prefixes to the John folder, the A1B2C2_ prefix to the Denver folder, and the A1B1C3_ prefix to the Nick folder. I'm thinking about something like

for group in *_*.csv; do {...}

But I'm not sure how to read the list from a csv file, nor about the syntax for moving files from a list. I'm working on CentOS; thank you very much.
Move all files with matching prefixes to folder based on a csv list
*'('*')'.* should work. Better yet, *'('????')'.* should get only the names that have four characters between the parentheses. Parentheses are special characters, so you have to put them in quotes.
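For example, assuming the destination directory is called movies/ (that name is made up), you could preview the expansion with echo first:

echo mv -- *'('????')'.* movies/   # dry run: shows which names the glob matches
mv -- *'('????')'.* movies/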
I want to selectively move all files that are movies with the year at the end of the filename. I have a program that renames movies and fixes the format, which always uses the Moviename (YEAR).extension format. I tried mv *(*).* but it just moved everything. I figure the best way would be to do something like "move files with brackets and 4 characters in the brackets". Also/or: the brackets will always come before the .extension.
Move files containing brackets and year, e.g. (1999)
It may be an NTFS problem. Look here. Try:

sudo aptitude update && sudo aptitude install ntfs-3g
umount /media/root/1610D6B410D699D7
mount -t ntfs-3g /dev/sda5 /media/root/1610D6B410D699D7
chmod +w /media/root/1610D6B410D699D7
chmod u+w /media/root/1610D6B410D699D7

then move it.
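If you want the read-write ntfs-3g mount to survive a reboot, a hedged sketch of an /etc/fstab entry would be the following (the device and mount point come from the question; the option set is a common default, adjust to taste):

/dev/sda5  /media/root/1610D6B410D699D7  ntfs-3g  defaults,uid=0,gid=0,umask=022  0  0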
There are two OSes on my PC: Debian 8.1 and Win7. The Win7 partition was mounted on /media/root/1610D6B410D699D7 when Debian 8.1 was loaded. I then tried to move a Debian iso without success. Why can't I move the iso file into /media/root/1610D6B410D699D7?

mv /home/debian8.1.iso /media/root/1610D6B410D699D7
mv: cannot create regular file ‘/media/root/1610D6B410D699D7/debian8.1.iso’: Permission denied

It is mounted RW:

root@debian:~# cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=250719,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=404492k,mode=755 0 0
/dev/sda7 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
/dev/sda9 /windows vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=utf8,shortname=mixed,utf8,errors=remount-ro 0 0
rpc_pipefs /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/user/109 tmpfs rw,nosuid,nodev,relatime,size=202248k,mode=700,uid=109,gid=117 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=202248k,mode=700 0 0
gvfsd-fuse /run/user/0/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0 0 0
/dev/sda5 /media/root/1610D6B410D699D7 ntfs rw,nosuid,nodev,relatime,uid=0,gid=0,fmask=0177,dmask=077,nls=utf8,errors=continue,mft_zone_multiplier=1 0 0
/dev/sda8 /media/root/fb2c100d-434c-4c37-a164-f0d000bc522a ext4 rw,nosuid,nodev,relatime,data=ordered 0 0
Why can't I move a file into an NTFS directory mounted RW?
First, install the perl rename utility. You seem to be using a Mac, so you'll probably need to use brew; it has perl rename packaged - see https://formulae.brew.sh/formula/rename. Once that's installed, you should be able to run something like:

$ find A/ -regex '.*_eeg.\(eeg\|vhdr\|vmrk\|json\)' \
    -exec rename -n 's=^.*/(sub-[^_]*)_=B/$1/eeg/$1_=' {} +

NOTE: I'm not sure exactly which version of find is installed on Macs these days. I'm assuming it's FreeBSD's version of find and has a -regex predicate. If not, use brew to install GNU find and use that instead of the default Mac find.

Without -regex you could do it by OR-ing several -name predicates together (in parentheses to force precedence):

find A/ \( -name '*_eeg.eeg' -o -name '*_eeg.vhdr' -o -name '*_eeg.vmrk' -o -name '*_eeg.json' \) -exec rename ...

Also note that rename's -n option makes it a dry run, so it will only show what it would do without actually renaming any files. Remove the -n, or replace it with -v for verbose output, when you've confirmed that it does what you want. Modify the rename script if it's not quite right until it works as you want it.

BTW, this assumes that B/subject/ and B/subject/eeg/ already exist. If they don't, and you want the rename script to create the directories before renaming the files, insert the following immediately before the s=== substitution operation:

if (m=^.*/(sub-[^_]*)_=) { mkdir "B/$1"; mkdir "B/$1/eeg" };

Sample run (with only one filename because I couldn't be bothered typing in more filenames from your pictures of text for the touch command, but it would work with the .vhdr, .vmrk, and .json files too if they existed in my A/ directory):

$ mkdir -p A B/sub-CDPC0001/eeg/
$ touch A/sub-CDPC0001_ses-01_task-rest_eeg.eeg

$ find A/ -regex '.*_eeg.\(eeg\|vhdr\|vmrk\|json\)' \
    -exec rename -v 's=^.*/(sub-[^_]*)_=B/$1/eeg/$1_=' {} +
A/sub-CDPC0001_ses-01_task-rest_eeg.eeg renamed as B/sub-CDPC0001/eeg/sub-CDPC0001_ses-01_task-rest_eeg.eeg
I have this parent directory X and these two subdirectories A and B (see Photo 1). Folder A contains files for specific subjects that I want to move to corresponding folders within folder B.

For example, in folder A, I have a lot of files for different subjects that end with different extensions (eeg.eeg, eeg.json, eeg.vhdr, and eeg.vmrk) (see Photo 2). I want to take all these files for each subject and move them to subfolders of B, where there is a subfolder for each subject: B >> "subject_id" >> ses-t1 >> eeg. The destination folders should be the "eeg" folders in each participant's folder. The desired result is depicted in the photo below (Photo 3).

The good thing, I think, is that the starting name of the files in folder A corresponds with the subject subfolder name. For example, a file starts with sub-CDPC0001_ses-01_task-rest_eeg.eeg, which is similar to the subject folder for this particular subject within folder B (sub_CDPC0001).

In the past, a mini script has been used for this, but I don't think it works properly on this arrangement right now:

for dir in $(ls -d */ses-t1); do
    find "./$dir/" -name '*_eeg.eeg' -exec mv {} "$dir/eeg" \;
    find "./$dir/" -name '*_eeg.vhdr' -exec mv {} "$dir/eeg" \;
    find "./$dir/" -name '*_eeg.vmrk' -exec mv {} "$dir/eeg" \;
    find "./$dir/" -name '*_eeg.json' -exec mv {} "$dir/eeg" \;
    #find "./$dir/" -name '*_anat.nii.gz' -exec mv {} "$dir/t1" \;
done

This lists all the files from folder A but fails to move them to the desired eeg folder. I would appreciate an answer very much, as there is a lot of data, and moving it manually is tedious and prone to error.
How to move files based on name correspondence with a folder?
We use the GNU find utility, which traverses the directory tree and gathers information along the way. The -depth option makes find do a depth-first traversal of the tree, so we come upon directory names as we go; for each directory that stores exactly one regular file, we move that file up and then delete the just-emptied directory.

find . -depth ! -name . -type d -execdir \
sh -c '
    isFileKnt_1() {
        test "$(cd "$1" && find . -maxdepth 1 -type f | grep -c /)" -eq 1
    }
    for d do
        isFileKnt_1 "$d" || continue
        t=$(mktemp -d)
        mv "$d"/* "$t"/.
        rmdir "$d"
        mv "$t"/* .
        rmdir "$t"
    done
' find-sh {} +

tree -F output before and after the operation:

.
├── dirB/
│   └── dirB
├── dirC/
│   ├── file_2
│   └── file_3
└── dirD/
    └── dirD

.
├── dirB
├── dirC/
│   ├── file_2
│   └── file_3
└── dirD
So I have multiple subdirectories, many of which have only 1 file, and the name of that file is the same as the subdirectory name.

DIR A
--DIR B
----B.zip
--DIR C
----C2.zip
----C3.zip
--DIR D
----D.zip

So the ideal result should be:

- both B.zip and D.zip moved to DIR A
- DIR B and DIR D are now empty and need to be removed, while DIR C is left alone because it contained more than 1 file

Is this something that is possible to do, or do I need to write special programming code? Thanks
How to search subdirectories that has only 1 file inside and move the file up one step to the parent directory
#! /bin/bash

shopt -s globstar   # enabled for '**' to match all files & directories recursively
#shopt -s dotglob   # uncomment to enable matching hidden files/directories too

cd /path/to/directory/Root
for pathname in **/*; do
    [[ -f "$pathname" ]] && echo mv -v -- "$pathname" "${pathname%%/*}/${pathname%%/*}.${pathname##*.}"
done

## then remove the remaining empty directories
for pathname in **/*; do
    [[ -d "$pathname" && -z "$(ls -A -- "$pathname")" ]] && rm -r -- "$pathname"
done

- [[ -f "$pathname" ]] checks if $pathname is a file.
- ${pathname%%/*}: using shell parameter expansion, this removes the longest suffix matching /*, i.e. everything from the first slash / onwards, leaving the top-level directory name (this is why the glob is **/* and not ./**/*).
- ${pathname##*.}: the same, but this removes the longest prefix matching *., i.e. everything up to and including the last dot ., leaving the extension.
- [[ -d "$pathname" ]] checks if $pathname is a directory.
- ... && -z "$(ls -A -- "$pathname")" then checks whether that directory is empty.

Remove the echo when you are happy with the result.
I have a problem where I need to rename just certain files after the parent folder and afterwards move these to a central folder. Is there a way to do this? I would like to run this on a Synology NAS.

Root
|-Subf1
| |-File.txt
| |-File.doc
| |-Subf1subf1
| | |-File.xml
| | |-File.xls
| |-Subf1subf2
| | |-File.pptx
| | |-File.docx
|
|-Subf2
| |-File.txt
| |-File.doc
| |-Subf2subf1
| | |-File.xml
| | |-File.xls

The result should be:

Root
|-Subf1
| |-Subf1.txt
| |-Subf1.doc
| |-Subf1.xml
| |-Subf1.xls
| |-Subf1.pptx
| |-Subf1.docx
|
|-Subf2
| |-Subf2.txt
| |-Subf2.doc
| |-Subf2.xml
| |-Subf2.xls

There is no problem with overwriting files as they all have different extensions.
Rename files in arbitrarily deep subfolders after the parent folder, then move them to a central folder
Try this:

# Create dir2:
mkdir dir2

# After moving f* to dir1, loop through these files (dir1/f*):
for f in dir1/f*; do
    # get the basename, cut off the "f" and put a "t" instead:
    t=source-tree/t$(basename "$f" | cut -c 2-)
    # If that file exists, move it to dir2
    [ -f "$t" ] && mv "$t" dir2
done
I have a directory structure which contains many files. I want to find similar files recursively and sort them based on their names. The easy part: find all files named f*.ext and move them to a certain directory, let's say dir1. But now I want to go through the tree again and find all t*.ext which have a match in dir1. For example, for dir1/f12345.jpg find the corresponding source-tree/t12345.jpg (if it exists) and move it to dir2. In the end, for every dir2/t*.ext there should be a dir1/f*.ext. All source-tree/t*.ext which don't have a dir1/f*.ext should remain where they are.
Sort files based on similarity of file names
for dir in */*; do
    if [[ -d "$dir" ]]; then
        (
            cd "$dir"
            mv -n * ..
            cd ..
            rmdir "$( basename "$dir" )"
        )
    fi
done

Be aware that any duplicated file or subdirectory names will not be moved, and so the deep directories in those cases will not be removed, due to still containing files.
The problem I am facing is that I have a directory that contains thousands of subdirectories, each of those subdirectories contain more subdirectories, and inside of all of those are images. What I have are thousands of these: /1056/7624/image.png I basically want to eliminate the 7624 directory here so that I end up with thousands of these instead: /1056/image.png I tried mv */*/* */* but that just freaked out... is this even possible to do with a terminal command? I'm trying to do this so I can use this multifile uploader without going into 50 directories just to grab 50 images.
Move contents of all sub subdirectories up into just their subdirectories
Shorter, with rename (it will fail across different filesystems):

rename '' "/dev/DataStage/myProject/Archive/TEST/`date +%Y%m%d_%H%M`." /dev/DataStage/myProject/source/TEST/MyFile_*.csv

With a loop:

for file in /dev/DataStage/myProject/source/TEST/MyFile_*.csv ; do
    filename=`basename "$file"`
    mv "$file" /dev/DataStage/myProject/Archive/TEST/`date +%Y%m%d_%H%M`."${filename}"
done
I need to move a file to an archive folder, and add a timestamp in front of the file name.

mv /dev/DataStage/myProject/source/TEST/MyFile_*.csv /dev/DataStage/myProject/Archive/TEST/MyFile_*.csv

moved MyFile_20180817.csv as My~1.csv instead of MyFile_20180817.csv.

When I move the file to the archive folder, I also need to add a timestamp to the front of the file name, for example MyFile_20180817.csv to 20180817_1057.MyFile_20180817.csv:

mv /dev/DataStage/myProject/source/TEST/MyFile_*.csv /dev/DataStage/myProject/Archive/TEST/`date +%Y%m%d_%H%M`.MyFile_*.csv

moved MyFile_20180817.csv as 201808~1.CSV instead of MyFile_20180817.csv.

Thank you.
Moving file with wildcard and add timestamp to the file name
With find:

find . -type f -name '*.out' -exec grep -q 'PATTERN' {} ';' \
    -exec sh -c 'cp "$1" "${1%.out}.gdx" /somewhere' sh {} ';'

Alternatively:

find . -type f -name '*.out' -exec grep -q 'PATTERN' {} ';' \
    -exec sh -c 'for name do cp "$name" "${name%.out}.gdx" /somewhere; done' sh {} +

This would find all files in the current folder or below whose names end with .out. If an .out file has a line matching PATTERN, the .gdx file in the same directory, with the same name prefix as the .out file, will be copied to /somewhere together with the .out file. No test is done for whether there is already an existing directory entry under /somewhere with the same name as the files being copied, or whether the .gdx file actually exists to start with.

See also:
Understanding the -exec option of `find`
Is it possible to use `find -exec sh -c` safely?
I want to search by file extension and by text content, and then copy a binary file from the same folder. For instance, I am in directory A and would like to copy all *.gdx files (in B, C, D) to somewhere else.

A
|-- B
|   |-- file1.out (a text file)
|   |-- file1.gdx (a binary file)
|
|-- C
|   |-- file2.out (a text file)
|   |-- file2.gdx (a binary file)
|
|-- D
|   |-- file3.out (a text file)
|   |-- file3.gdx (a binary file)

Here is my code:

cd 'find . -maxdepth 2 -name "*.out"|xargs grep "sometext"| awk -F'/' '{print $2}'|sort -u ' && ' find . -maxdepth 2 -name "*.gdx" -print0|xargs -0 cp -t /somewhere'

The problem here: if the first find captures multiple folders, then only one *.gdx file from the first folder is copied, not all *.gdx files from all folders. I believe it has to be done with a loop, but I don't know how to script it.
Find directories by file extension and copy/move somewhere else [closed]
This is what we start with:

$ tree base/
base/
|-- ab
|   `-- 12
|       `-- 13
|           `-- 37
|               |-- file1.txt
|               |-- file2.txt
|               `-- file3.txt
|-- af
|   `-- f3
|       `-- 45
|           `-- 9e
|               |-- third1.txt
|               `-- third2.txt
`-- cd
    `-- b8
        `-- e2
            `-- a1
                |-- other1.txt
                `-- other52.txt

12 directories, 7 files

First we add the new directories:

$ find base -type d -mindepth 4 -maxdepth 4 -exec mkdir {}/extra_folder ';'

We use both -mindepth 4 and -maxdepth 4 to create new directories on level four only. Without the -mindepth 4 we would get new directories on higher levels, and without -maxdepth 4 the new directories would themselves be filled with new directories until the pathnames became so long that find was no longer able to create more. The extra_folder directory is created with mkdir called from -exec. Now we have:

$ tree base/
base/
|-- ab
|   `-- 12
|       `-- 13
|           `-- 37
|               |-- extra_folder
|               |-- file1.txt
|               |-- file2.txt
|               `-- file3.txt
|-- af
|   `-- f3
|       `-- 45
|           `-- 9e
|               |-- extra_folder
|               |-- third1.txt
|               `-- third2.txt
`-- cd
    `-- b8
        `-- e2
            `-- a1
                |-- extra_folder
                |-- other1.txt
                `-- other52.txt

15 directories, 7 files

Then we'll move the files down:

$ find base -maxdepth 5 -type f -execdir mv {} extra_folder ';'

This looks for any regular file in or under the base directory (I'm assuming there are files only on the lowest level) that is on level five. It then uses -execdir to run the mv command inside the directory where the found file is located ({} will be the basename of the found file). We end up with:

$ tree base/
base/
|-- ab
|   `-- 12
|       `-- 13
|           `-- 37
|               `-- extra_folder
|                   |-- file1.txt
|                   |-- file2.txt
|                   `-- file3.txt
|-- af
|   `-- f3
|       `-- 45
|           `-- 9e
|               `-- extra_folder
|                   |-- third1.txt
|                   `-- third2.txt
`-- cd
    `-- b8
        `-- e2
            `-- a1
                `-- extra_folder
                    |-- other1.txt
                    `-- other52.txt

15 directories, 7 files

In one go:

$ find base -type f \
    -execdir sh -c '[ ! -d "$1" ] && mkdir "$1"; mv "$2" "$1"' sh 'extra_folder' {} ';'

This finds all regular files and moves them into a directory called extra_folder regardless of where they are to start with. Running this command multiple times will move them further and further down. The mini-script that is called by -execdir:

[ ! -d "$1" ] && mkdir "$1"
mv "$2" "$1"

This will be called with the folder name as $1 and with the filename as $2, and will create the folder if it doesn't exist and then move the file into it.
I have a LARGE amount of files in the following structure: all files in /base/, then 4 folders with 2 "random" letters, and then a series of files related to each other. Example:

/base/ab/12/13/37/file1.txt
/base/ab/12/13/37/file2.txt
/base/ab/12/13/37/file3.txt
/base/cd/b8/e2/a1/other1.txt
....
/base/cd/b8/e2/a1/other52.txt
/base/af/f3/45/9e/third1.txt
/base/af/f3/45/9e/third2.txt

etc. I want to keep most of the structure, but add one ADDITIONAL (extra_folder) folder at the end, in which my files belong, such that the above is changed to:

/base/ab/12/13/37/extra_folder/file1.txt
/base/ab/12/13/37/extra_folder/file2.txt
/base/ab/12/13/37/extra_folder/file3.txt
/base/cd/b8/e2/a1/extra_folder/other1.txt

I expect that I will need a shell script and the move command. Thank you very much.
How to move many different sub folders one level down?
Well, this is all kind of like when you go to your doctor and say "Doc! It hurts when I do this!" and he says "So don't do that! Problem solved!". rsync is for synchronizing directories but, from your description, you don't want to do that. You want to "unsync" two directories: you want the files to be in one, but not both of them. I take it that you can't modify the code generating the files on the other system since, if you could, you would simply have it run ftp or rcp or curl or some such as soon as the files are finished to push them over to the target machine from the source machine and then delete them. So working only from the target machine you're best off to just run a periodic job to sign on to the remote machine and copy and delete everything in ~/datadir. It'll save rsync's overhead of comparing the two directories: this being wasted effort since you don't care about the contents of ./localdir - it's always going to get the contents of ~/datadir pulled into it. Using rcp or scp is the simplest but if the only access you have on the remote system is rsync then run that in a cron job. This will cause a delay between file creation and transport of some few minutes depending on the periodicity of your job since, as you note, rsync doesn't run live. If you need immediate transport you'd have to run a file sharing server such as amule on the other machine but that's a lot of complexity and overhead to save a few minutes and you'd still have to sign on every so often to delete the files: something no file sharing utility is going to do for you. In all of this there is the lurking gremlin of How do you know you've got complete and uncorrupted files waiting for you on the other end? If you're just taking things from the directory in which the files are created then any number of things can result in your taking some fraction of a complete file over to your target machine. You could, for example, start the copy (or rsync or whatever) while the file is being written out by the code on the source machine. Or the creating program could just fall over due to some hardware problem, such as a full disk, part way through creation. So when doing this sort of thing I always have separate creation and transportation directories and then mv (not cp) files after successful creation from the creation to the transportation directory. I'm very paranoid about file corruption too so I always cook my own digest/checksum/manifest file for the source files as well on top of all the automatic low level stuff that guards against corruption.
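A minimal sketch of that create-then-move pattern on the source machine, assuming made-up directory names and that the generator writes *.dat files:

cd /path/to/creation_dir || exit 1
for f in *.dat; do
    sha256sum "$f" >>MANIFEST        # digest for corruption checks on the far side
    mv "$f" /path/to/transport_dir/  # atomic rename: same filesystem assumed
done
mv MANIFEST /path/to/transport_dir/  # manifest arrives last, signalling completion

The target machine then fetches only from transport_dir and can verify each file against the manifest (sha256sum -c MANIFEST) before deleting anything remotely.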
I have a large compute job running on a remote machine that generates ~40 data files every ~20 minutes. I would like to pull the generated files from the remote machine to my local machine as soon as they are generated, and immediately delete them from the remote machine. I've gotten part of the way there using:

rsync --remove-source-files user@remote:~/datadir/* ./localdir

However, this does not run rsync "live", i.e. if new files are added to datadir I need to re-run rsync. To my understanding, rsync first creates a list of files to copy, then goes through the list one by one. I am wondering: is there a way to update the list as new files are added to datadir, or some other way to move files from the remote machine to local as soon as they are generated?
rsync - update sync list while rsync is running
One-line command here:

find /path/to/dir1 -type f ! -mmin -10 -exec mv {} /path/to/dir2 \;

Replace -10 with whatever you want; the rule is: +n for greater than n, -n for less than n, n for exactly n.
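Since you mentioned cron: an entry like the following (the paths are placeholders, added via crontab -e) runs the sweep once a minute:

* * * * * find /path/to/dir1 -type f ! -mmin -10 -exec mv {} /path/to/dir2 \;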
Replace the "10 minutes" with whatever value. Basically I only want to move the file if it's not growing any more. How can I do this on the command-line or a bash script? A solution that is easy to cron is preferred.Details:OS: CentOS What I have tried so far: nothing because I don't know where to even start What kind of files: any files in a directory
How can I move a file to another folder, but only if it hasn't been modified in the last 10 minutes?
Use Ctrl+Left Arrow or Ctrl+Right Arrow.
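If those keys do nothing in zsh, they are probably just unbound; assuming your terminal sends the common xterm-style escape sequences (they may differ per terminal), these lines in ~/.zshrc set them up:

bindkey '^[[1;5D' backward-word   # Ctrl+Left Arrow
bindkey '^[[1;5C' forward-word    # Ctrl+Right Arrow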
In a CentOS zsh terminal, a path I've just typed or copied is very deep, and I need to change a word in the middle. Ctrl-f and Ctrl-b can move the cursor by one character. Is there a way to move the cursor by word, like in vim?
How can I move terminal cursor by word?
Your second command will work, but you didn't specify the file name. If you want to move all of the files matching the pattern 1995*.info and/or 1995*.dat:

find ./ -type f -name '1995*.info' -exec mv {} /1995/info \;
find ./ -type f -name '1995*.dat' -exec mv {} /1995/dat \;

If you want to do all years in one command:

for y in {1995..1999}; do
    find Years -type f -name "$y*.info" -exec mv {} "/$y/info" \;
done
for y in {1995..1999}; do
    find Years -type f -name "$y*.dat" -exec mv {} "/$y/dat" \;
done

That will loop through the directories for each year, find the matching filenames, and move them to their yearly directories.

Sidenote: I tested and confirmed this in CentOS 7.4. The patterns are quoted so the shell doesn't expand the * before find sees it; depending on the environment, escaping it with a backslash (\*) works as well.
I have a bunch of files in sub-directories that I want to move to a different directory. The directories are organized in one parent directory like this: Apr1995, Apr1996, Aug1995, etc. (month then year), so /Files/Apr1995 for example. The files are formatted like this: 1995___.info or 1995___.dat.

I want to go into each sub-directory and move the files to a different directory, where the sub-directories are separated by year and then format, something like this: OtherFiles/1995/info, OtherFiles/1995/dat, OtherFiles/1996/info, etc. The 1995 directory would have sub-directories named info and dat.

In the end I want this organization, for example: Desktop/Files/Apr1995/1995__.info, Desktop/Files/Apr1995/1995__.dat, Desktop/OtherFiles/1995/info, Desktop/OtherFiles/1995/dat, and so on.

I've tried quite a few options, like a couple of one-liners:

for d in ./*/ ; do (cd "$d" && mv 1995*.info /1995/info); done

or

find ./ -type f -execdir mv 1995*.info /1995/info {} \;

I just get mv errors, or it doesn't recognize that there are files like those. A shell script could help too. I'm kind of at a loss for something that seems relatively easy. Any help would be appreciated.

EDIT: hopefully the added info can help.
Need to loop through folder and move files to different directory? [closed]
On a Linux-based system, or another one using GNU find, you can use a loop something like this:

find -maxdepth 1 -type f -printf '%s\t%P\0' |
sort -z -rn |
(
    # x is max files per directory; d is directory number; k is file counter
    x=500 d=1 k=1
    while IFS=$'\t' read -r -d '' size path
    do
        printf "%d\t%d\t%s\n" $k $d "$path"    # File nr, Directory nr, Filename
        echo "##" mkdir -p "/path/to/dir-$d"
        echo "##" mv -f "$path" "/path/to/dir-$d/${path##*/}"
        [[ $((k++)) -ge $x ]] && { k=1; ((d++)); }    # Next directory
    done
)

Remove echo "##" from the two action lines in the loop when you are sure that they are going to do what you want them to do. Comment out the printf if you don't need a status report of what's going where.
I have a directory that contains 10,665 jpeg files. I want to move 500 files to a new directory, and 500 to the next directory, etc. The largest files must be moved first: 500-1 contains the 500 largest files, 500-2 the next largest 500 files, etc. The reason I want to do this is that I want to give the JPEGs to someone and the file manager hangs because there are so many in one directory.
Move every 500 files in new directory [closed]
This snippet of a script will do what you ask for several years:

for i in $(seq 2010 2020); do
    mkdir -p "$i" && mv *"$i"*.pdf "$i"
done
I am trying to achieve this using bash. I have a directory of files; a sample to get the picture is below:

January 2010 MA - C3 Edexcel.pdf
January 2010 MS - C3 Edexcel.pdf
January 2010 QP - C3 Edexcel.pdf
January 2011 MA - C3 Edexcel.pdf
January 2011 MS - C3 Edexcel.pdf
January 2011 QP - C3 Edexcel.pdf

I am looking for a command that will take the three files from each year and put them into a folder for that year; for example, the first three files should go into a folder called 2010, and the second group of three files should go into a folder called 2011. So what I am trying to do is

mkdir 2010| mv *2010* 2010

for every year in the folder. To be clear, the folder is much larger than what I showed, meaning doing it year by year would take too much time. Is this possible?
Bash command to create a folder and move specific files into it
Resolved: I had to delete the folder /apps/gfss/ipt/files/R2R/Japan_WHT/21940000 and recreate it. Then I cd'd to /apps/gfss/ipt/files/R2R/Japan_WHT/ and gave permissions on the 21940000 directory using chmod -R 777 21940000/. I think there was some hidden directory inside 21940000 created by some other user, and hence I was unable to do the chmod before.
I can copy a file but cannot move it in my Mac Terminal. I want to execute the following command:

mv /apps/gfss/ipt/files/R2R/Japan_WHT/21940000/DATA.xlsx /apps/gfss/ipt/files/R2R/Japan_WHT/21940000/inprogress/

And I get the following error:

mv: rename /apps/gfss/ipt/files/R2R/Japan_WHT/21940000/DATA_21940000.xlsx to /apps/gfss/ipt/files/R2R/Japan_WHT/21940000/inprogress/DATA_21940000.xlsx: Permission denied

However, if I use Finder, I can move it. But I want to do this from Terminal. I am able to copy, though:

cp /apps/gfss/ipt/files/R2R/Japan_WHT/21940000/DATA.xlsx /apps/gfss/ipt/files/R2R/Japan_WHT/21940000/inprogress/

Wondering what it is that I am doing wrong!? Below is the ls -ltr output:

gfss-apac-ipt2:21940000 admin$ pwd
/apps/gfss/ipt/files/R2R/Japan_WHT/21940000

gfss-apac-ipt2:21940000 admin$ ls -ltr
total 384
-rw-r--r--@ 1 alokur wheel 193385 Nov 22 12:09 DATA_21940000.xlsx
drwxr-xr-x+ 2 admin  wheel     68 Nov 22 13:08 inprogress

P.S.: The move does not work even when I do chmod 777 inprogress; I get the same error.
I can copy a file but cannot move it from my Mac Terminal [closed]
Given that Google killed chrome://memory in March 2016, I am now using smem:

# detailed output, in kB apparently
smem -t -P chrom

# just the total PSS, with automatic unit:
smem -t -k -c pss -P chrom | tail -n 1

To be more accurate, replace chrom with a full path, e.g. /opt/google/chrome or /usr/lib64/chromium-browser. This works the same for multiprocess Firefox (e10s) with -P firefox.

Be careful: smem reports itself in the output, an additional ~10-20 MB on my system. Unlike top, it needs root access to accurately monitor root processes -- use sudo smem for that.

See this SO answer for more details on why smem is a good tool and how to read the output.
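If you want to watch the total change over time, a small sketch wrapping the same pipeline in watch:

watch -n 2 "smem -t -k -c pss -P chrom | tail -n 1"   # refresh every 2 seconds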
Since Google Chrome/Chromium spawns multiple processes, it's harder to see how much total memory these processes use. Is there an easy way to see how much total memory a series of connected processes is using?
Get chrome's total memory usage
This is a complicated question you're asking. Without knowing more about the nature of your threads it's difficult to say. Some things to consider when diagnosing system performance: is the process/thread

- CPU bound (needs lots of CPU resources)
- Memory bound (needs lots of RAM resources)
- I/O bound (network and/or hard drive resources)

All three of these resources are finite, and any one can limit the performance of a system. You need to look at which (it might be 2 or 3 together) your particular situation is consuming. You can use ntop, iostat, and vmstat to diagnose what's going on.
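As a starting point, something like the following (myprog is a placeholder for your program's name) shows where the time is going:

vmstat 1 5                           # run queue, memory and swap activity, once per second
iostat -x 1 3                        # per-device I/O utilization and wait times (sysstat package)
top -H -p "$(pgrep -d, -x myprog)"   # per-thread CPU usage of one process

If the run queue in vmstat stays far above your core count, the extra threads are just contending for CPU; if %util in iostat is pegged, they are fighting over the disk instead.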
Tried to run program X using 8 threads and it was over in n minutes. Tried to run the same program using 50 threads and it was over in n*10 minutes. Why does this happen, and how can I get the optimal number of threads to use?
Why does using more threads make it slower than using fewer threads
The entry in POSIX on "Signal Generation and Delivery" in "Rationale: System Interfaces General Information" says:

Signals generated for a process are delivered to only one thread. Thus, if more than one thread is eligible to receive a signal, one has to be chosen. The choice of threads is left entirely up to the implementation both to allow the widest possible range of conforming implementations and to give implementations the freedom to deliver the signal to the "easiest possible" thread should there be differences in ease of delivery between different threads.

From the signal(7) manual on a Linux system:

A signal may be generated (and thus pending) for a process as a whole (e.g., when sent using kill(2)) or for a specific thread (e.g., certain signals, such as SIGSEGV and SIGFPE, generated as a consequence of executing a specific machine-language instruction are thread directed, as are signals targeted at a specific thread using pthread_kill(3)). A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.

And in pthreads(7):

Threads have distinct alternate signal stack settings. However, a new thread's alternate signal stack settings are copied from the thread that created it, so that the threads initially share an alternate signal stack (fixed in kernel 2.6.16).

From the pthreads(3) manual on an OpenBSD system (as an example of an alternate approach):

Signal handlers are normally run on the stack of the currently executing thread.

(I'm currently not aware of how this is handled when multiple threads are executing concurrently on a multi-processor machine.)

The older LinuxThreads implementation of POSIX threads only allowed distinct single threads to be targeted by signals. From pthreads(7) on a Linux system:

LinuxThreads does not support the notion of process-directed signals: signals may be sent only to specific threads.
If a Unix (POSIX) process receives a signal, a signal handler will run. What will happen to it in a multithreaded process? Which thread receives the signal? In my opinion, the signal API should be extended to handle that (i.e. the thread of the signal handler should be able to be determined), but hunting for info on the net I only found years-long flame wars on the Linux kernel mailing list and on different forums. As I understood it, Linus' concept differed from the POSIX standard: first some compat layer was built, but now Linux follows the POSIX model. What is the current state?
What happens to a multithreaded Linux process if it gets a signal?
int pthread_attr_setstacksize(pthread_attr_t *attr, size_t stacksize);

The stacksize attribute shall define the minimum stack size (in bytes) allocated for the created thread's stack.

In your example, the stack size is 8388608 bytes, which corresponds to the 8 MB returned by the command ulimit -s (which reports in kB). So that matches.

From the pthread_create() description:

On Linux/x86-32, the default stack size for a new thread is 2 megabytes. Under the NPTL threading implementation, if the RLIMIT_STACK soft resource limit at the time the program started has any value other than "unlimited", then it determines the default stack size of new threads. Using pthread_attr_setstacksize(3), the stack size attribute can be explicitly set in the attr argument used to create a thread, in order to obtain a stack size other than the default.

So the thread stack size can be set either via the set function above, or via the ulimit system property. For the 16K you're referring to, it's not clear on which platform you've seen that and/or whether any system limit was set for it. See the pthread_create page and here for some interesting examples on this.
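You can see the RLIMIT_STACK effect from the shell; a quick sketch, with ./myprog standing in for any program that prints its default pthread stack size the way yours does:

ulimit -s 4096   # soft stack limit in kB, i.e. 4 MB
./myprog         # under NPTL, new threads now default to a 4 MB stack

The lowered limit only affects programs started from that shell afterwards.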
As I understand, the default stack size for a pthread on Linux is 16K. I am getting strange results on my 64-bit Ubuntu install.

$ ulimit -s
8192

Also:

pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &stacksize);
printf("Thread stack size = %d bytes \n", stacksize);

Prints

Thread stack size = 8388608 bytes

I'm quite sure the stack size is not "8388608". What could be wrong?
Default stack size for pthreads
“CPU(s): 56” represents the number of logical cores, which equals “Thread(s) per core” × “Core(s) per socket” × “Socket(s)”. One socket is one physical CPU package (which occupies one socket on the motherboard); each socket hosts a number of physical cores, and each core can run one or more threads. In your case, you have two sockets, each containing a 14-core Xeon E5-2690 v4 CPU, and since that supports hyper-threading with two threads, each core can run two threads. “NUMA node” represents the memory architecture; “NUMA” stands for “non-uniform memory architecture”. In your system, each socket is attached to certain DIMM slots, and each physical CPU package contains a memory controller which handles part of the total RAM. As a result, not all physical memory is equally accessible from all CPUs: one physical CPU can directly access the memory it controls, but has to go through the other physical CPU to access the rest of memory. In your system, logical cores 0–13 and 28–41 are in one NUMA node, the rest in the other. So yes, one NUMA node equals one socket, at least in typical multi-socket Xeon systems.
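To see the topology per logical CPU rather than as totals, these two commands help (numactl may need to be installed separately):

lscpu --extended    # one row per logical CPU, with NODE, SOCKET and CORE columns
numactl --hardware  # per-node memory sizes and inter-node access distances

In the --extended output you can see directly which two logical CPUs share a physical core, and which NUMA node each one belongs to.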
You can see the output from the lscpu command:

jack@042:~$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                56
On-line CPU(s) list:   0-55
Thread(s) per core:    2
Core(s) per socket:    14
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
Stepping:              1
CPU MHz:               2600.000
CPU max MHz:           2600.0000
CPU min MHz:           1200.0000
BogoMIPS:              5201.37
Virtualization:        VT-x
Hypervisor vendor:     vertical
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              35840K
NUMA node0 CPU(s):     0-13,28-41
NUMA node1 CPU(s):     14-27,42-55

I can see that there are 2 sockets (each of which is like a processor??), and inside each of the sockets we have 14 cores. So, in total 2x14=28 physical cores. Normally, a CPU can contain multiple cores, so the number of CPUs can never be smaller than the number of cores. But, as shown in the output, CPU(s): 56, and this is what is confusing me. I can see that Thread(s) per core: 2, so these 28 cores can behave like 2x28=56 logical cores.

Question 1: What does CPU(s): 56 denote? Does CPU(s) denote the number of virtual/logical cores, since it cannot be physical cores, at least?

Question 2: What does this NUMA node mean? Does it represent the socket?
Understanding output of lscpu
tar -c -I 'xz -9 -T0' -f archive.tar.xz [list of files and folders]

This compresses a list of files and directories into a .tar.xz archive. It does so by specifying the arguments to be passed to the xz subprocess, which compresses the tar archive. This is done using the -I argument to tar, which tells tar what program to use to compress the tar archive, and what arguments to pass to it. The -9 tells xz to use maximum compression. The -T0 tells xz to use as many threads as you have CPUs.

An update from January, 2024: even when using the multithreading option, xz barely scales beyond two threads; besides, its compression/decompression performance is relatively low. I highly recommend using zstd instead. The command will be:

tar -c -I 'zstd -22 --ultra --long -T0' -f archive.tar.zst [list of files and folders]

Caveats:
- This command needs a lot of RAM, at the very least 5 GB. You may want to reduce the compression level, or remove the ultra/long options, to decrease RAM consumption.
- Threading might not be used if you don't have enough source data (less than 1 GB). If you still want to use threading for a small amount of data, decrease the compression level from 22 to, say, 20.
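For completeness, the matching extraction commands; GNU tar normally detects the compression by itself, and if the archive was created with a large --long window you may need to pass --long when decompressing too:

tar -xf archive.tar.xz                         # compression auto-detected
tar -x -I 'zstd -d --long' -f archive.tar.zst  # explicit decompressor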
I use

tar -cJvf resultfile.tar.xz files_to_compress

to create a tar.xz, and

tar -xzvf resultfile.tar.xz

to extract the archive into the current directory. How can I use multi-threading in both cases? I don't want to install any utilities.
How to use multi-threading for creating and extracting tar.xz
Threads are an integral part of the process and cannot be killed outside it. There is the pthread_kill function but it only applies in the context of the thread itself. From the docs at the link:Note that pthread_kill() only causes the signal to be handled in the context of the given thread; the signal action (termination or stopping) affects the process as a whole.
$ ps -e -T | grep myp | grep -v grep
  797   797 ?        00:00:00 myp
  797   798 ?        00:00:00 myp
  797   799 ?        00:00:00 myp
  797   800 ?        00:00:00 myp

This shows the process myp with PID = 797 and four threads with different SPIDs. How can I kill a particular thread of the process without killing the whole process? I understand that it might not be possible at all in some cases, when there are fatal dependencies on that particular thread. But is it possible in any case? If yes, how? I tried kill 799 and the process itself was terminated. Now I am not sure whether this was because there were dependencies that made myp fail without thread 800, or because kill is simply not able to kill individual threads.
How can I kill a particular thread of a process?
As Celada mentioned, there would be no point in using multiple threads of execution, since a copy operation doesn't really use the CPU. As ryekayo mentioned, you can run multiple instances of cp so that you end up with multiple concurrent I/O streams, but even this is typically counter-productive. If you are copying files from one location to another on the same disk, trying to do more than one at a time will result in the disk wasting time seeking back and forth between each file, which will slow things down. The only time it is really beneficial to copy multiple files at once is if you are, for instance, copying several files from several different slow, removable disks onto your fast hard disk, or vice versa.
Is there a multi-threaded cp command on Linux? I know how to do this on Windows, but I don't know how this is approached in a Linux environment.
Multithreaded cp on Linux? [duplicate]
I've used Stéphane Chazelas' pv-based solution for some time, but found out that it exited randomly (and silently) after some time, anywhere from a few minutes to a few hours.

-- Edit: The reason was that my PHP script occasionally died because of a max execution time exceeded, exiting with status 255.

So I decided to write a simple command-line tool that does exactly what I need. Achieving my original goal is as simple as:

./parallel.phar 5 20 ./my-command-line-script

It starts almost exactly 5 commands per second, unless there are already 20 concurrent processes, in which case it skips the next execution(s) until a slot becomes available. This tool is not sensitive to a status 255 exit.
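For anyone who wants to stay in the shell, roughly the same behaviour can be sketched in plain bash (my-command-line-script is the script from the question; GNU sleep with fractional seconds is assumed):

#!/bin/bash
max=20                                     # concurrency cap
while true; do
    while (( $(jobs -rp | wc -l) >= max )); do
        sleep 0.05                         # all slots busy: delay the next start
    done
    ./my-command-line-script &             # launch one call in the background
    sleep 0.2                              # at most 5 starts per second
done

It lacks the phar tool's robustness (no logging, and the cap check races slightly with job completion), but it shows the two knobs involved: the 0.2 s start interval and the concurrency limit.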
I have a command-line script that performs an API call and updates a database with the results. I have a limit of 5 API calls per second with the API provider. The script takes more than 0.2 seconds to execute.

- If I run the command sequentially, it will not run fast enough, and I will only be making 1 or 2 API calls per second.
- If I run the command sequentially, but simultaneously from several terminals, I might exceed the 5 calls / second limit.

Is there a way to orchestrate threads so that my command-line script is executed almost exactly 5 times per second? For example, something that would run with 5 or 10 threads, where no thread would execute the script if a previous thread has executed it less than 200 ms ago.
How to run a command at an average of 5 times per second?