You can create a .desktop file:

    $ cd Desktop
    $ touch eclipse.desktop

Open it with your favorite text editor (gedit, for example):

    $ gedit eclipse.desktop

Add this to that file:

    [Desktop Entry]
    Type=Application
    Name=Name of your application
    Icon=/path/to/icon
    Exec=/path/to/application

Finally, make it executable:

    $ chmod u+x eclipse.desktop
Today I downloaded the latest Eclipse installer (eclipse-inst-linux64.tar.gz) from the official website and installed it on my system. Now I want to create a shortcut to launch the program. How can I do that? If I double-click the eclipse file selected in the screenshot, I am able to launch the program. Can someone help me create a shortcut?
How to create a shortcut for Eclipse
Create a new template file for a launcher. As indicated here, the ~/Templates folder can be used to add new options under the context menu "New document". So:

    gedit ~/Templates/"New Launcher.desktop"

with this content:

    [Desktop Entry]
    Type=Application
    Name=
    Icon=
    Categories=System;Settings;
    Exec=
    Terminal=false

Open it as text, fill in the different lines as desired and save. For an internet link, write something like Exec=firefox <your link>. Then double-click it and select "Trust and launch" to see the proper name.
The idea is to be able to create a shortcut from the context menu in order to access an application or even an internet link. gnome-desktop-item-edit (as indicated here) depends on the gnome-panel package, which is not available on all systems. Is there another way?
Add 'Create launcher' to Nautilus context menu (without `gnome-desktop-item-edit`)
You can install alacarte to edit menu entries. In Debian-based distributions:

    sudo apt-get install alacarte

Otherwise, as you noted, the information is in .desktop files in the given locations (in particular, the Exec line). I just do:

    grep -iR "name that shows up in menu" ~/.local/share/applications /usr/share/applications

Then just look at the Exec lines of the files which seem to be likely candidates.
I have been wanting to inspect the executable lines of these application shortcuts for several applications, as I am having trouble opening some of them, or would like to know what settings they use by default to start the program. Either way, I would like to know how to do this. It is a common flow for me to see how a program was set up and inspect where it is failing at each step in the process. I do this in Windows a lot: I right-click the shortcut in the Start Menu or Taskbar, with or without modifier keys held down so that the correct context menu items show up, and select "Properties". I would like the equivalent in Linux-based OS distributions, for at least Ubuntu, Mint, and elementaryOS. For more information: I am still learning how things usually work in these types of operating systems, but through some non-trivial amount of effort I have found that the information I am looking for is usually stored in *.desktop files, and they can be found under at least these 3 directory paths:

    /usr/local/share/
    /usr/share/applications/
    ~/.local/share/applications/

For example, I want to create a shortcut to the "Remote Desktop Viewer" application that comes with elementaryOS, so that it opens a connection to a certain host automatically, without my having to click buttons and enter connection information into the dialog. I would like to avoid guesswork and internet searching for the matching command-line executable.

Edit: Found out the application executable file's name was "Vinagre"; who would have thought of that.. This is the trouble I would like to avoid in the future :)
How can I view an application shortcut's content to find what exact executable line it runs?
The custom shortcuts seem not to be saved as an overall scheme, but as separate groups of shortcuts. In the Custom Shortcuts window select Edit - New group, if you don't have one or more already. Check to enable the group and then drag & drop the shortcuts you already have onto the group. To save a group, right click on it and export it.
Exporting is possible for the global shortcuts in the dedicated GUI. Is it possible for the custom shortcuts too? A similar option is not present in the KDE4 GUI for custom shortcuts, but I imagine there must be some file to back up.
Export and import KDE custom shortcuts?
Actually this function has deliberately been omitted from elementaryOS; it was a specific design choice. But you can at least enable My Computer or the Trash bin. With Pantheon (the default file manager in Luna) you can't bring files or folders onto the desktop, but in Nautilus it's possible. Run

    sudo apt-get install gnome-tweak-tool

to install it. Then run gnome-tweak-tool to launch it. From the Desktop menu you can adjust the desktop to your liking: check "Have file manager handle the desktop" and other options like displaying the My Computer or Trash icon. However, we need to make these changes apply at boot. Go to System Settings > Startup Applications > Add. Input the Name (Nautilus, for example) and type nautilus -n in Command.
Is there any way to have files on the desktop in elementary OS Luna? I have checked and tried installing the tweak tool but couldn't move files to the desktop.
Desktop shortcuts in Elementary OS
I figured out a solution to this problem myself that doesn't even involve modifying the Gnome source code. It is not what I was initially looking for, but it works perhaps just as well. In dconf-editor, under /org/gnome/desktop/wm/keybindings/, I just changed the following two settings:

    switch-applications=['<Super>Tab', '<Alt>Tab', '<Alt>l']
    switch-applications-backward=['<Shift><Super>Tab', '<Shift><Alt>Tab', '<Alt>h']

The first two keybindings are the Gnome defaults, whereas the last one is added by me. Of course, this has the (initially unintended) side-effect of bringing up the application switcher whenever I hit <Alt>h or <Alt>l, but since they're not previously used for anything, this could perhaps be justifiable behavior.

EDIT: The proposed solution works fairly well! I have tested it a bit and it suits my workflow (where I use hjkl for just about everything, being a Vim user). However, I have "stress tested" it a bit and discovered two minor inconsistencies, which are due to the fact that the switch-applications* events are not actually the same as the ones which are hard-coded to the arrows in the application switcher.

First, <Alt>Left and <Alt>Right will not bring up the application switcher if it's not already there, unlike the recently proposed <Alt>h and <Alt>l. It seems to me a natural extension of the default behavior that they should. This can be fixed as follows:

    switch-applications=['<Super>Tab', '<Alt>Tab', '<Alt>l', '<Alt>Right']
    switch-applications-backward=['<Shift><Super>Tab', '<Shift><Alt>Tab', '<Alt>h', '<Alt>Left']

Second, <Super>Tab can be used as an alternative to <Alt>Tab in Gnome (and likewise with the Shift key). Since <Super>l (or h) is not mapped to switch-applications*, using h or l will not work in this case. It would be easy to add this, but beware that they are by default mapped to minimizing a window and locking the screen, so you would have to remove those keybindings. Also, <Super>Left and <Super>Right are mapped to tiling windows to the left/right part of the screen. If you again want to use Vim-style hjkl for these, you have three things <Super>l (and h) might be used for, so you'd have to choose (unless you're up for some source-code editing of the application switcher).

By the way, for those interested in using Vim keybindings in Gnome, I maintain a more complete set of keybindings in my Git repository at https://github.com/sigvaldm/gnome-dconf.
In Gnome 3 you can enter dconf-editor and navigate through lots of settings. Amongst others, you can navigate to /org/gnome/desktop/wm/keybindings/ to find that Alt+Tab brings up the application switcher. You can change the keybinding for the application switcher or even add new ones in addition to those already present. However, once the application switcher is open, and while you're still holding down Alt, you can use the arrows to navigate within it. I'd like to add custom keymappings hjkl in addition to the arrows, but I cannot find the keymappings for this anywhere in dconf-editor (yes, I actually looked through the whole thing and didn't find it). Does anyone know where I can find these settings? Thanks.
Changing keybindings for arrows in Alt+Tab application switcher in Gnome 3
Not sure about Gnome/XFCE-specific options, but xbindkeys (https://wiki.archlinux.org/index.php/Xbindkeys) can do this. Configure it with a ~/.xbindkeysrc file and run xbindkeys during your X session. From the default config file:

    # The format of a command line is:
    #    "command to start"
    #       associated key

where the command can be a shell command, alias, or program (functions didn't seem to work for me). E.g. I use it to handle my volume keys:

    "~/apps/pa-vol.sh mute"
        XF86AudioMute
    "~/apps/pa-vol.sh minus"
        XF86AudioLowerVolume
    "~/apps/pa-vol.sh plus"
        XF86AudioRaiseVolume

Run xbindkeys -k to capture a keystroke for inclusion in your config.
Suppose I have an alias or a function defined in my .bashrc that is not complex enough to be worth a separate script. Is it possible to bind that alias/function to the shortcuts facility provided by the graphical interface (Gnome or xfce4)?
How can I use a keyboard shortcut from a bash alias or function?
A symlink already behaves like a Windows shortcut, as it contains only a path to the target file. It just looks like your symlink is targeting the file that installs the game instead of the file that runs the game.
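If the game also needs to be started from its own directory, one alternative to a symlink is a small wrapper script. This is only a sketch; both paths are hypothetical examples:

    #!/bin/sh
    # Launch the game from its install directory so it finds its files.
    # Paths below are made-up examples; adjust to your setup.
    cd "$HOME/Games/mygame" || exit 1
    exec ./game

Make the script executable (chmod +x) and put it, or a .desktop entry pointing at it, on the desktop instead of the symlink.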
I'm trying to launch a game from a symbolic link located on my desktop, but every time I do, the game installs all its files onto my desktop. And when I try relaunching the game after moving all the files to the game folder, the game tries reinstalling the files again. Is there any way I can fix this? Or are there any alternatives to symbolic links that act more like Windows shortcuts?
Getting symbolic links to behave more like shortcuts in Windows [closed]
Since your request seems to be for GUI only, I'd suggest following jofel's comment about .desktop files. Arch Linux has quite a good short summary of the interesting keys in .desktop files. A suggestion would be:

    [Desktop Entry]
    Name=Whatever you want
    Type=Application
    Path=/home/user
    Exec=env WINEPREFIX="<prefix>" wine appDirectory/application.exe

The env WINEPREFIX part may not be required, but use it if you have multiple prefixes or for good measure. Place the <Whatever you want>.desktop file in your $HOME/Desktop folder and you should be set with a new icon that you can click. This file would give you at least appDirectory/application.exe to grep for. Path is where the command will be executed. As such, you can modify the Path and Exec portions to get more of the path into Exec if you need it for grepping. So with Path=/ and the rest in Exec, you will get the requested behaviour. However, the need for grepping like this is not clear to me. If it's only run as one instance at a time, you could create a PID file instead and check whether the PID exists, as presented here but modified to your needs. This answer might also be of interest if you decide to go for a PID file.
Sorry if this seems a daft question but I'm still new to Linux. Is there an equivalent of a Windows shortcut in Linux (as opposed to a link)? The problem I have is this: I have an application that sits in /home/user/appDirectory/application.exe (it's a Windows app running under Wine). I then have a monitoring script that looks for that application to see if it is running, e.g.:

    application="/home/user/appDirectory/application.exe"
    if pgrep -f "$application" > /dev/null
    then
        is_running=1
    else
        is_running=0
    fi

So far so good. But I need to put a 'shortcut' on the desktop so that anyone can go in and easily stop/start the application. If I create a link (and put that on the desktop) and start it from that, then the path becomes /home/user/desktop/link to appDirectory/application.exe and the monitoring script can't see it. Is it possible to create a shortcut that, when opened, opens as the original location, so the application then starts from the original location?
Make a shortcut to a program that changes to its directory
You can configure global shortcuts to "raise or run" any app. The best way I can think of is using the command wmctrl like this:

    wmctrl -xa Mail.Thunderbird || thunderbird

This tries to focus the Thunderbird window and otherwise runs the command after ||. You can see a list of your currently opened windows with:

    wmctrl -lxG
In Windows, there is a shortcut to open an application that is on the dock panel. For example, Super + 1 opens the first application, Super + 2 the second, and so on. Is it possible to do this in Deepin?
How to make a shortcut in Deepin to open an application that is on the dock panel?
You just need to say that you want an interactive (-i) shell and it'll load up the extra files which express your shell preferences. So:

    bash -i -c 'echo $PATH; $SHELL -i'

You could also just conditionally echo the path in your .bashrc or .bash_profile and use the environment to trigger that, something like:

    if [ "" != "$echopath" ]; then
        echo $PATH
    fi

Then your shell could just be:

    env echopath=1 bash -i
I have created an application shortcut in Ubuntu like this:

    [Desktop Entry]
    Encoding=UTF-8
    Type=Application
    Exec=bash -c 'echo $PATH;$SHELL'
    Icon=/home/mani/Desktop/omnetpp-5.0/ide/icon.png
    Terminal=true
    Name=Sample Application
    Categories=Development;Application

I saved it under the name sampleApp.desktop. Double clicking on the shortcut shows me one value of $PATH, but the actual value of $PATH in my terminal is different. My guess is that double clicking on the shortcut runs the application in a non-interactive shell, so the content of my .bashrc is not parsed. How can I print the full $PATH using a desktop shortcut?
Printing $PATH variable using desktop shortcut
Since you want to write to different disks which may be available only part of the time, what you need is file synchronization. There are many options for this. Syncthing works for me, although I don't sync local paths.
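If you'd rather approximate "write once, appear everywhere" directly, a small watcher loop is another option. This is only a sketch: it assumes the inotify-tools package is installed, and the paths are the example ones from the question.

    #!/bin/sh
    # Copy every file that finishes being written in SRC to each
    # destination that is currently available.
    SRC="$HOME/multiPlaces"
    inotifywait -m -e close_write --format '%f' "$SRC" |
    while read -r name; do
        for dest in "$HOME/externalHardDrive" "$HOME/cloudPlace" "$HOME/local"; do
            [ -d "$dest" ] && cp -p "$SRC/$name" "$dest/"
        done
    done

Note this copies after a write completes rather than mirroring every write in real time, which is usually what you want for video renders anyway.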
I want to create a sort of shortcut such that when I write to it, the same data is written to X different places at the same time. For example, KDenLive writes /home/user/multiPlaces/untitled.mp4, and the OS writes /home/user/externalHardDrive/untitled.mp4, /home/user/cloudPlace/untitled.mp4 and /home/user/local/untitled.mp4. All of that at the same time, whatever program asks to write into /home/user/multiPlaces/.
write file to multiple places (different file-systems) [closed]
Given the behaviour when running it with the utility kioclient exec, it is almost guaranteed that the problem is some missing additions to the environment variables. This can be checked by comparing env in the terminal against env in the script that the .desktop file references. Note that adding a bash shebang will do nothing about modifications made to the environment in ~/.bashrc, because that file is only executed in interactive bash sessions, as the manpage states.
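One quick way to capture the environment the launcher actually provides is to temporarily point the Exec line at a shell that dumps it to a file; the output path here is just an example.

    # Sketch: temporary Exec line in the .desktop file
    Exec=sh -c 'env > /tmp/desktop-env.txt'

    # Then, from a normal terminal, compare the two environments:
    diff <(sort /tmp/desktop-env.txt) <(env | sort)

Anything present in the terminal but missing in the dump (e.g. PATH entries needed to find npm) is a candidate to set explicitly in start.sh.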
I have created a desktop shortcut for an npm application called TMXEditor, but it doesn't work. I can launch the app if I do cd /home/souto/Apps/maxprograms/TMXEditor && npm start in a terminal. I put that in a bash file /home/souto/Apps/maxprograms/TMXEditor/start.sh. The application runs if I just run that script in a terminal. Its exact contents are:

    #!/bin/bash
    cd /home/souto/Apps/maxprograms/TMXEditor && npm start

So I have created the .desktop file pointing to that:

    [Desktop Entry]
    Name=TMXEditor
    Exec=/home/souto/Apps/maxprograms/TMXEditor/start.sh
    Icon=/home/souto/Apps/maxprograms/TMXEditor/icons/tmxeditor.png
    StartupNotify=true
    Terminal=false
    Type=Application
    Categories=Translation;
    Comment=
    Path=/home/souto/Apps/maxprograms/TMXEditor

The .desktop file is saved as /home/souto/.local/share/applications/TMXEditor.desktop. I can see the shortcut in Rofi, but when I run it from there the application will not start. I have also tried putting Exec=xfce4-terminal -e "/home/souto/Apps/maxprograms/TMXEditor/start.sh. In that case, I can see a terminal blinking for a fraction of a second, but still the application will not run. Both the .desktop and the bash files are executable:

    -rwxrwxr-x 1 souto souto 296 feb 19 14:27 /home/souto/.local/share/applications/TMXEditor.desktop
    -rwxrwxrwx 1 souto souto  67 feb 19 13:53 /home/souto/Apps/maxprograms/TMXEditor/start.sh

My desktop environment is Xfce 4.18 (on Arch Linux) and I normally use zsh 5.9 as my shell. I'd appreciate some help debugging this shortcut. Thanks.
How can I start an npm app from a desktop shortcut?
You don't need to cd at all. Just use the full path:

    /home/YOUR_USERNAME/Applications/Flameshot/Flameshot-0.10.1.x86_64.AppImage

For example, if your username is elad, then you would put:

    /home/elad/Applications/Flameshot/Flameshot-0.10.1.x86_64.AppImage

If you are unsure what the full path is, you can check by opening a terminal and running:

    readlink -f ~/Applications/Flameshot/Flameshot-0.10.1.x86_64.AppImage

Then you can put whatever that returns in the GUI.
I've installed a new version of an app. Currently its GUI shortcut leads to:

    /usr/bin/flameshot

But I've installed it in a different path, and now I run it from the terminal like this:

    cd ~/Applications/Flameshot; ./Flameshot-0.10.1.x86_64.AppImage

How can I change the GUI shortcut to run the new version without blocking the terminal (i.e., in the background)?
How to run 2 lines in GUI command line shortcut
On my system it looks like this: Applications (the Start Menu) -> Settings -> Keyboard -> Application Shortcuts tab -> Add button. In the command input we can put this command:

    xfce4-screenshooter -fs '/home/user_name/Images'

For other options, look at the "xfce4-screenshooter via command line" section.
I use the Cinnamon desktop environment, but shortcuts like Ctrl + Alt + T for opening the terminal or Prt Sc for Print Screen don't work. How can I set these two shortcuts (and others) in the Cinnamon desktop environment? These two are the most important for me. Versions of my software:

    Debian 9.8 (x86-64)
    Cinnamon 3.2.7
How do I set standard behavior for common shortcuts in Cinnamon?
That's xkill. It is shipped with Xorg, which is the standard X11 server, so you likely already have it installed. In any window manager or desktop environment, you can associate a shortcut with a command; the way that is done varies, so look into the documentation.
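For instance, with xbindkeys (covered elsewhere on this page) the binding is a single stanza in ~/.xbindkeysrc; the key combination below is only an example:

    # Sketch: turn the pointer into the kill cursor with Ctrl+Alt+x.
    "xkill"
        control+alt + x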
There is a feature in X11 that temporarily transforms your mouse pointer into a "kill X11 application" icon (I don't even know what to call it) and allows you to forcibly terminate a process that owns an X11 window. On occasion I've activated this feature accidentally and want to be able to do this consistently. What is the keyboard shortcut, mouse shortcut, or other means of activating this feature? How does it work? Is it a legacy feature, and/or does it only work under some environments?
Terminate X11 Application with Mouse
I don't believe this is possible with ufw. ufw is just a frontend to iptables, which also lacks this feature, so one approach would be to create a crontab entry which periodically runs and checks if the IP address has changed, updating the rule if it has. You might be tempted to do this:

    $ iptables -A INPUT -p tcp --src mydomain.dyndns.org --dport 22 -j ACCEPT

But this will resolve the hostname to an IP and use that for the rule, so if the IP later changes, the rule will become invalid.

Alternative idea

You could create a script like so, called iptables_update.bash:

    #!/bin/bash
    # allow a dyndns name

    HOSTNAME=HOST_NAME_HERE
    LOGFILE=LOGFILE_NAME_HERE

    Current_IP=$(host $HOSTNAME | cut -f4 -d' ')

    if [ ! -f "$LOGFILE" ] ; then
        iptables -I INPUT -i eth1 -s $Current_IP -j ACCEPT
        echo $Current_IP > $LOGFILE
    else
        Old_IP=$(cat $LOGFILE)
        if [ "$Current_IP" = "$Old_IP" ] ; then
            echo IP address has not changed
        else
            iptables -D INPUT -i eth1 -s $Old_IP -j ACCEPT
            iptables -I INPUT -i eth1 -s $Current_IP -j ACCEPT
            /etc/init.d/iptables save
            echo $Current_IP > $LOGFILE
            echo iptables have been updated
        fi
    fi

source: Using IPTables with Dynamic IP hostnames like dyndns.org

With this script saved, you could create a crontab entry like so in the file /etc/crontab:

    */5 * * * * root /etc/iptables_update.bash > /dev/null 2>&1

This entry would then run the script every 5 minutes, checking to see if the IP address assigned to the hostname has changed. If so, it will create a new rule allowing it, while deleting the old rule for the old IP address.
I run a VPS which I would like to secure using UFW, allowing connections only to port 80. However, in order to be able to administer it remotely, I need to keep port 22 open and make it reachable from home. I know that UFW can be configured to allow connections to a port only from a specific IP address:

    ufw allow proto tcp from 123.123.123.123 to any port 22

But my IP address is dynamic, so this is not yet the solution. The question is: I have dynamic DNS resolution with DynDNS, so is it possible to create a rule using the domain instead of the IP? I already tried this:

    ufw allow proto tcp from mydomain.dyndns.org to any port 22

but I got:

    ERROR: Bad source address
UFW: Allow traffic only from a domain with dynamic IP address
Right click on the Network Manager icon on the Ubuntu top panel and select Edit. Go to the Wired Network or Wireless Network tab and select the network name. Click on the Edit button and go to the IPv4 Settings tab in the new window. If the method is Automatic (DHCP), you are using DHCP. Another method is to cat /var/log/syslog and check for something like the following:

    DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 6
    DHCPOFFER from 10.100.1.254
    DHCPREQUEST on eth0 to 255.255.255.255 port 67
    DHCPACK from 10.100.1.254

If you have something similar to the above, you are using DHCP (the IP addresses could be different).
How can I find out if my IP address is fixed or dynamically assigned via DHCP? I need to tell my network administrator what IP address my virtual machine is using. I know the numbers, but I don't know if the address is fixed or not. I have tried ifconfig, and that returned my IP address.
How to find out if Ubuntu is using DHCP (Ubuntu 12.04 LTS GUI)
In addition to Tony's answer of querying OpenDNS, which I use in my scripts upon logging on to my servers to display both the local machine's and the remote public IP address:

    echo `hostname` `hostname -i` `dig +short +time=1 myip.opendns.com @resolver1.opendns.com`

Google also offers a similar service:

    dig TXT +short o-o.myaddr.l.google.com @ns1.google.com | awk -F'"' '{ print $2}'

If you have a private IP address, behind a home or corporate router/infrastructure, or even if you are your own router, these services on the Internet will reveal the public IP address you are using to reach them, as it is what arrives at them when you make the request. Please note that the above methods only work if the Linux machine in question has direct access to the Internet. If your Linux server is your router, besides being able to have a look at your current interfaces, you might also do:

    hostname -i

as the public IP address is often the main/first interface. If it is not the first interface, you might also do:

    $ hostname -I
    95.xx.xx.xxx 192.168.202.1 192.168.201.1

which shows you the IP addresses of all the machine's interfaces. Please read too: How To Find My Public IP Address From Command Line On a Linux. Again, if the Linux server is the router, it might be interesting to place a script in /etc/dhcp/dhclient-exit-hooks.d to track and act on your IP changes, as I documented in this question: Better method for acting on IP address change from the ISP?
The easiest/simplest understanding of the web is:

a. When you connect to your ISP, the ISP gives you a dynamic address (like a temporary telephone number) only for the duration of that connection; the next time you connect, you will again have a different dynamic IP address.

b. You use the browser to go to different sites which have static IP addresses (like permanent numbers or/and the permanent address of an establishment).

Now, is there a way to get one's own IP address without going to a web service like whatismyipaddress.com? The connection is as follows:

    ISP - Modem/Router - System

Edit: The Modem/Router is a D-Link DSL-2750U ADSL router/modem: http://www.dlink.co.in/products/?pid=452. I did see How to track my public IP address in a log file? but that also uses an external web service; it would be better/nicer if we could do without going to an external URL/IP address for the same.
Is there a way to find self's dynamic public ip address using cli in Debian? [duplicate]
This depends on how similar to DynDNS.org this service should be. For your seemingly small use case I would probably set up a combined DHCP/BIND server (with Linux, what else). The DHCP server is able to update your DNS server, which acts as the primary server for a subdomain of "your" provider domain. Make sure to register that subdomain with a short TTL, or register your subdomain at your provider as "to be forwarded to". The more complicated part is assigning fixed names to your DSL machines. Do you control them / have a fixed number of them with unchanging MAC addresses? The lease time for DHCP should be > 1 day, so the same client gets the same IP+name again.

Update: I found someone with exactly your problem, and the solution, here. There is an open-source project named GNUdip that should fulfill your requirements.
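If you run BIND yourself, clients can also push their own record updates with nsupdate and a shared TSIG key. A sketch, where the zone, host name, key file and resolver are all placeholder examples:

    # Discover the current public IP (same trick as elsewhere on this page)
    IP=$(dig +short myip.opendns.com @resolver1.opendns.com)

    # Send a signed dynamic update to the primary server
    nsupdate -k /etc/bind/ddns.key << EOF
    server ns1.example.com
    zone dyn.example.com
    update delete myhost.dyn.example.com A
    update add myhost.dyn.example.com 300 A $IP
    send
    EOF

The short TTL (300 here) plays the same role as the short TTL mentioned above: clients notice address changes quickly.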
You all probably know commercial dynamic DNS providers like dyndns.org or no-ip.com. But how do you create a similar service for just a handful of machines? What server software would be best suited for such a setup (under Linux)?

Advantages:

- the service would be under your control
- no tracking by some opaque company

Minimal requirements: probably something like: you own at least one host machine with a static IP, you own a domain, and your domain provider lets you configure DNS records.

Clients: a few machines that are connected via cable/DSL and only get dynamic IP addresses on each dial-up and/or every x hours.
How to create a custom dynamic DNS solution?
If you want to stick with AWS tools, follow these steps:

1. Create an AWS IAM user, e.g. dns-updater, and assign it this AWS managed policy: AmazonRoute53FullAccess.
2. Generate secret and access keys for the user.
3. Install AWS-CLI (e.g. pip install awscli).
4. Configure AWS-CLI, entering the above secret and access keys:

        aws configure

5. From a cron job on the RPi, run a script that does the following. Obtain the external public IP, e.g.:

        RPI_EXT_IP=$(curl http://ifconfig.co)

   Create an update JSON file:

        cat > /tmp/r53-update.json << __EOF__
        {
          "Changes": [
            {
              "Action": "UPSERT",
              "ResourceRecordSet": {
                "Name": "rpi.your-route53-domain.com",
                "Type": "A",
                "TTL": 600,
                "ResourceRecords": [
                  { "Value": "${RPI_EXT_IP}" }
                ]
              }
            }
          ]
        }
        __EOF__

   Call AWS-CLI to update the Route53 record using the above JSON file, replacing the hosted zone id with the real id of your Route53 zone:

        ~ $ aws route53 change-resource-record-sets \
              --hosted-zone-id ZXCVBNMEXAMPLE \
              --change-batch file:///tmp/r53-update.json

Let us know if you need any clarification. Don't forget to accept the answer if it helped :)
A Raspberry Pi at home running Raspbian Jessie 8.0 is running Apache. Using dig TXT +short o-o.myaddr.l.google.com @ns1.google.com produces an IPv4 address that is used to update the value in the record sets of the hosted zone in AWS's Route 53. Testing the domain name is successful. What I'd like to do now is update Route 53 whenever my home's dynamic IP address changes, from within the Raspberry Pi, without any assistance from me. Please let me know if you require any more information.
How does one automatically update Route53 from a raspberry pi server at home?
Some context: when a program asks your machine to resolve a hostname into an IP address, it looks into your /etc/hosts and, if the name is not found there, it then makes a DNS query. You don't need to keep a non-loopback IP address in it. You can usually just keep the localhost entries and an alias. See, these are my /etc/hosts contents:

    [braga@coleman ~]$ cat /etc/hosts
    127.0.0.1   localhost.localdomain localhost
    127.0.0.1   coleman.jazz coleman
    ::1         localhost6.localdomain6 localhost

coleman.jazz or coleman (named for the musician, Ornette Coleman) is just an alias for my machine.

Direct answers:

(1) Just leave it out.

(2) You can replace it wherever you want to; it's just an alias. You can even replace it with www.google.com (and www.google.com on your machine will then point to your own machine).
I am setting up a RHEL-based server that is associated with dynamic DNS from DynDNS, with a domain of, say, "abc.dyndns.org" that is dynamically updated with the server's IP address. I have read that in order to ensure access to your server's services, you need to have at least the following in your /etc/hosts:

    127.0.0.1       localhost.localdomain localhost
    xxx.xxx.xxx.xxx redhatbox.yourcompany.com redhatbox

where "xxx.xxx.xxx.xxx" is whatever IP address your server has, and "redhatbox" would be the name of the computer. So here are my questions: (1) Because my server has an IP that is dynamically assigned by my ISP's DHCP, there is no one IP I can put in place of xxx.xxx.xxx.xxx; what should I do in this case? (2) Should I simply replace "redhatbox.yourcompany.com" with my DynDNS domain "abc.dyndns.org"? And replace the "redhatbox" alias with "abc"? If anyone can explain all this for a novice like me that would be great. Thank you very much for your detailed answers and patience.
Hosts file on server with dynamic DNS?
I do the following, which has worked well for me for the last 10+ years. I set up a dynamic DNS name on a service such as DynDNS (which was free until this year) or some other such provider. This gives me a foothold so that my constantly changing IP will always be rooted in a static name such as sam.dyndns.org. I then create CNAMEs in BIND that point to this static name, and voila, I have permanent names.
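In a BIND zone file, those CNAMEs are one line each. A sketch, with example.com and sam.dyndns.org standing in for your own zone and dynamic DNS name:

    ; fragment of the example.com zone - names are placeholders
    www   IN  CNAME  sam.dyndns.org.
    mail  IN  CNAME  sam.dyndns.org.
    ssh   IN  CNAME  sam.dyndns.org.

Note the trailing dots: they keep BIND from appending the zone's own origin to the target name.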
I intend to run a Debian server at home; it will host various websites, an SSH server and email. I have a dynamic IP address and I am unwilling to pay the extra for a static IP. I was thinking I could probably get around the DNS issue if I ran my own name server and used something like no-ip to set auto-updated nameserver addresses for my registered domains, e.g.:

    On the registrar:
    john-hunt.com (and my other domains) nameservers = johnns1.noip.com & johnns2.noip.com
    johnns1.noip.com, johnns2.noip.com -> my dynamic IP

This will make sure that the nameservers for my domains are always pointing to my machine at home. I will run BIND or something similar on the home machine to actually serve up the DNS records. The real problem I have is that I don't quite know how I'd configure BIND (or tinydns or whatever) to accept and apply updates when my IP address changes. I can think of a way to bodge it (poll & ping johnns1.noip.com to get my IP address, then grep on the zonefiles and reload every 5 minutes..) but that doesn't feel very solid. Does anyone have any experience in this area? I had a look at no-ip's enhanced services but they want $25 for hosting records for every domain (and I have quite a few).
Running my own dynamic DNS record hosting
As long as the DD-WRT router is also the DHCP Server for the network, you can set up a static DHCP lease for the server in DD-WRT in Services > DHCP Server under the Static leases section. This will make sure that the DHCP server always hands out the same IP address for your server when it asks for a DHCP lease. In order to do this, you need to know the MAC address for the server. You can determine this with the command ip addr on the server by looking at the link/ether entry for the interface connected to the router (most likely wlan0 in your case). Alternatively, you can also get the MAC address with ifconfig (run with /sbin/ifconfig as regular user on Ubuntu, /sbin is most likely not in your PATH). In that case look at the HWaddr entry for the relevant interface. Fill in the MAC Address, hostname for the server and the desired IP address to the list of static leases in DD-WRT, Save, and Apply Settings. Note that if the IP address you set up for the static lease is not the one currently assigned to the server, you'll need to have the server give up its current lease and request a new lease from the DHCP server in order for it to get the permanent IP address.
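To make the server give up its lease and pick up the new static one without a reboot, something like this works on systems using dhclient; wlan0 is only an example interface name:

    # Release the current DHCP lease, then request a fresh one
    sudo dhclient -r wlan0
    sudo dhclient wlan0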
I'm very new to the networking side of the house and am trying to set up SSH on my new Ubuntu machine. I have my DD-WRT router working with DDNS now and can log in to the router page, but I'm not sure how to go about forwarding to my server machine. I realize I would forward port 22 (or a unique port), but to set up forwarding, DD-WRT wants to know the IP I'm pointing at. That IP will change, won't it? How do I make the router point to the machine rather than to the current IP? Or am I asking the wrong question?
SSH Setup - How will the router know my computer's ip?
Adding to John's answer, the problem ended up being that I had configured my ddclient incorrectly. Changing use=if, if=eth0 to use=web in ddclient.conf fixed the problem for me.
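For reference, the relevant part of /etc/ddclient.conf would then look something like the sketch below; the protocol, credentials and hostname are placeholders for whatever your DNS provider issues (Google Domains, used in the question, generates a per-host username/password pair):

    # discover the public IP via a web service instead of reading eth0
    use=web

    protocol=googledomains
    login=generated-username
    password=generated-password
    home_hostname.my_domain.me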
First question I've asked here so please forgive me if I accidentally break a rule. I recently set up an ssh server on my home machine, and am using ddclient to keep the dynamic DNS service at home_hostname.my_domain.me updated with the home machine's address. The domain and the dynamic DNS service are provided by Google Domains. When I try to ssh or remote desktop (using VNC) into my home machine from work via ssh home_hostname.my_domain.me or the Remmina VNC client, I somehow end up connecting back to my own work machine. I tried this from several different computers, all with the same result: the DNS server directs them back to themselves. Can anyone explain what I did wrong to cause this, and how to fix it? If my ddclient conf files are needed to diagnose the problem I can provide them in a few hours when I get home from work.
ssh and VNC connections are directed back to the original machine by dynamic DNS server
Use stateful firewall rules. Connection state for stateful rules is handled by Netfilter's conntrack subsystem and can be used from nftables. The goal is to allow (select) outgoing packets, let them be tracked (automatically) by conntrack, and allow back as incoming packets only those that are part of the flow initially created in the outgoing part. conntrack works automatically as soon as a rule references it (any ct expression). In addition, it should work automatically in the initial (host) network namespace as soon as it is loaded, even without a rule. As the OP didn't provide the complete ruleset, I'm just replacing rules and don't attempt to create a full ruleset (e.g.: allowing packets on the lo interface is quite common, or maybe the output chain could also have a drop policy). I'm not attempting simplifications either (e.g. recent nftables/kernel versions allow a single rule for TCP and UDP). This becomes:

    table inet tb {
        chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept
            ....
            ....
        }
        chain forward {
            ....
        }
        chain output {
            .....
            ct state established accept
            udp dport 53 accept
            tcp dport 53 accept
        }
    }

The ephemeral ports aren't used anymore in the ruleset (there's not even a need to specify source port 53). An incoming packet which is a reply to the outgoing packets to port 53 will be automatically accepted. The related part also allows related packets, such as ICMP errors when a destination is unreachable, to be accepted (thus preventing a timeout in this case). One can now also follow flow states using these commands (to be run in the same network namespace as the application, in case containers are involved). For a list:

    conntrack -L

For (quasi-realtime) events:

    conntrack -E

or more specifically with these two commands, for example (running in two terminals):

    conntrack -E -p tcp --dport 53
    conntrack -E -p udp --dport 53

Of course there's much more to all this. Further documentation:

- Stateful firewall
- Connection Tracking System
- Matching connection tracking stateful metainformation
I am using Ubuntu 20.04 with the dnsjava client library to query DNS servers. I have an nftables rule on this machine which blocks all traffic on all ports except the ephemeral port range 32768-61000, which is used by dnsjava to get results from the DNS server:

    table inet tb {
        chain input {
            type filter hook input priority 0; policy drop;
            tcp dport 32768-61000 accept
            udp dport 32768-61000 accept
            ....
            ....
        }
        chain forward {
            ....
        }
        chain output {
            .....
        }
    }

It looks like allowing the 32768-61000 range might be a security flaw, but completely blocking this port range adds latency to DNS resolution and causes many failures due to timeouts. Is there a way to avoid this rule allowing the port range in nftables? Is there any nftables feature which we can use to avoid this without impacting DNS resolution latency?
How to avoid allowing ephemeral port range rule in nftables
With a reasonably modern OpenSSH, you can run a shell command to select a Match block in ~/.ssh/config. Assuming you have a script am-on-home-network that returns 0 when executed on your home network and 1 when executed outside:

    Match Host myserver exec "am-on-home-network"
        HostName myserver
        User iago-lito
        Port 22

    Host myserver
        HostName myserver.ddns.net
        User iago-lito
        Port 22

For am-on-home-network, you can use arp to explore the local network. Look for your home router's MAC address. (Looking for IP addresses is unreliable because many private networks use the same ranges of private IP addresses.)

    #!/bin/sh
    timeout 0.2 arping -f -q -I eth0 12:34:56:78:9a:bc

Adjust the MAC address to the MAC address of your router that your computer sees when it's at home. Adjust eth0 to the network interface on your computer that is used to connect to your home router.

The pure SSH approach has the advantage that it can be done in userland, but it only works for SSH, and it increases the connection establishment delay noticeably. A better solution is to run a DNS server at the system level, and configure it to serve the local IP address for the global name myserver.ddns.net when on the local network. Dnsmasq is a small, simple DNS cache and server, suitable for running on an endpoint machine or a small network. If you aren't already running a DNS cache on your machine, it will make general Internet usage a bit faster. Ubuntu runs dnsmasq by default. In dnsmasq, create a file /etc/dnsmasq.d/home-server containing:

    host-record=myserver.ddns.net,192.168.2.1

Add the following script to your network startup scripts (whatever they are on your distribution):

    #!/bin/sh
    comment=\#
    if timeout 0.2 arping -f -q -I eth0 12:34:56:78:9a:bc; then
      comment=
    fi
    sed -i "\$s/^#*/$comment/" /etc/dnsmasq.d/home-server
    service dnsmasq restart

If your system sets up Dnsmasq through D-Bus, editing the configuration file isn't the best option, and I don't even know if it'll work. You would need to call dbus-send to add or remove the host record based on the output of arping. Or, if you're using NetworkManager, configure it to set the host entry on the connection corresponding to your home network.
I have set up a local ssh server, which I like to access with this neat alias from my local network (~/.ssh/config):

    Host myserver-local
        HostName 192.168.2.8
        User iago-lito
        Port 22

In order to access it remotely, I have set up a no-ip account to access it via dyndns IP resolution, with this neat alias from any other network (~/.ssh/config):

    Host myserver
        HostName myserver.ddns.net
        User iago-lito
        Port 22

Unfortunately, because my router does not allow NAT loopback, I need to use:

    ssh myserver

when I'm away and:

    ssh myserver-local

when I'm at home.. which makes scripting quite annoying when it comes to automated scp, git push, etc. How could I make the same alias work in both cases?
Make hostname adapt to local/remote situation
I am a bit confused about your setup; maybe I am misunderstanding it. Anyhow, the way it's normally done is to have one central place to configure everything (in your case, that should probably be your router). Then you don't have to care about the configuration of the RaspPis. In fact, you can configure them identically; all differences will be resolved by the RaspPis using DHCP. If you look at dnsmasq's man page, it can read /etc/ethers (man ethers for details) to give each RaspPi a static IP based on the RaspPi's MAC address. It also reads /etc/hosts to provide DNS resolution for those static IP addresses, so you can name your RaspPis however you want. If you do it that way, a plain out-of-the-box DHCP client on the RaspPis should suffice. You don't need dhcpd anywhere.

Edit

"because why would you assign an ip via DHCP when there's already one assigned statically?"

Because you don't want to configure each RaspPi separately. "Statically" doesn't mean "locally configured". Statically means "every machine always gets the same IP address". You can do that with DHCP by looking at the MAC address of the machine. Imagine you had a thousand RaspPis. Would you manage those individually? No, you'd manage them in a central location, and keep them otherwise identical.

"The reason is I don't know how to set dhcpcd back to go look for an address from dnsmasq."

I don't get why you think you need to run dhcpd on the RaspPis. If they need to get other information by DHCP, you need a DHCP client, not a DHCP server. If you want to configure each static address for them locally, then you again can do that without a DHCP server. If you in addition want to configure each DNS name for them locally by running a DHCP server on them, then this is not going to work. (Though you can make it work by running DHCP clients on them, and having them tell the central DHCP server (your router) their hostnames in the DHCP request.) For DNS, you need to have a central server where all the information is.
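For concreteness, the two files on the dnsmasq host could look like this sketch (MAC addresses, IPs and names are made-up examples; dnsmasq needs its read-ethers option enabled to consult /etc/ethers):

    # /etc/ethers - MAC address to fixed IP
    b8:27:eb:12:34:56  192.168.1.10
    b8:27:eb:65:43:21  192.168.1.11

    # /etc/hosts - names for those fixed IPs
    192.168.1.10  pi-one
    192.168.1.11  pi-two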
I'm setting up a couple of Raspberry Pis on my router's DMZ (don't worry, all the ports are closed); my router uses dnsmasq for DNS, so I added the MAC addresses, hostnames and IPs of the Pis to the DHCP static leases. Now that said, I'm only learning to use dhcpcd; I'm used to the old way of using /etc/network/interfaces to configure IP address assignment. On the Pis themselves, I've configured a static IP address in /etc/dhcpcd.conf and pointed them at my dnsmasq DNS server. It seems a little strange to do this, but is it okay? This way my Pis get a DNS record (so the devices can find each other) and a static IP address; I suppose I could configure it so that each pulls its IP based on its MAC address using the dhcpcd client. That said, I don't really know how to configure dhcpcd to pull its IP address from dnsmasq. I'm planning on adding additional DNS records (maybe from /etc/hosts) for the Pis to pick up for separate nginx server blocks, so is it okay to have static IPs configured in dhcpcd while I have static DHCP leases configured? Or is that weird and I shouldn't do that?
Static IP and DHCP Lease in dnsmasq?
The answer about accessing NAT from an internal network is, more correctly: you do not want to do that, because of:

- restrictions of consumer-grade technology;
- performance reasons: NAT uses more CPU resources and memory, albeit at a domestic scale this is not worrisome;
- more complex routing, both to use and to debug.

The alternatives are:

- if accessing only from that local server, creating a hosts file entry;
- creating a name server, and creating views if you have a public DNS name that belongs to you (not the case you present, but usual in an enterprise);
- creating a name server, and creating a custom internal name, like ssh.home;
- using BIND+RPZ, and redefining the external name to your internal IP address;
- if doing routing with a Linux box, with iptables+NAT, capturing the SSH sessions to your external IP and NATing them to your internal IP address (see the sketch after this list).

About my comments about capturing the IP address/creating the hosts file, see this answer for how I deal at home with my DDNS address: Better method for acting on IP address change from the ISP?

For BIND+RPZ see: Configure BIND as Forwarder only (no root hints), encrypted + RPZ blacklist / whitelist all together; Large zone file for bind9 : ad-blocking
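That last alternative (hairpin NAT on a Linux router) boils down to two iptables rules; this is only a sketch, with placeholder addresses for the public IP, the LAN and the internal server:

    # LAN clients that hit the public IP on port 22 get rewritten
    # to the internal server...
    iptables -t nat -A PREROUTING -d 198.51.100.7 -p tcp --dport 22 \
        -j DNAT --to-destination 192.168.2.8

    # ...and are masqueraded so the server's replies flow back
    # through the router instead of going to the client directly.
    iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -d 192.168.2.8 \
        -p tcp --dport 22 -j MASQUERADE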
I have set up a no-ip account to access my ssh server at home remotely with myserver.ddns.net, which works well.. from outside only.

From outside:

    $ ping myserver.ddns.net    # success
    $ ssh myserver.ddns.net     # success

From local:

    $ ping 192.168.2.8          # success
    $ ssh 192.168.2.8           # success
    $ ping myserver.ddns.net    # success, resolving to 90.113.108.192
    $ ssh myserver.ddns.net     # loOong time waiting, then..
    Connection closed by 90.113.108.192 port 22

Why could it be so?
SSH and ddns: can connect remotely but not locally
You need to buy a domain name from a domain registrar, and request a static IP from your ISP. Once you have the domain name and static IP, you need to configure an A record for them on the DNS server.
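The A record itself is a single zone-file line; a sketch, with placeholder name and address:

    ; point the domain (and www) at your static IP - example values
    example.com.      IN  A  203.0.113.10
    www.example.com.  IN  A  203.0.113.10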
I want to know how to make a website hosted on my local machine, using apache2 as the web server, visible on the internet. I'm completely new to these topics, and the thing is (just for practice, to learn how it works): how can a domain name resolve to my IP, so that once I type it in my browser it takes me to my website? I know that once my IP address changes, it won't be accessible, because it is a dynamic IP.
Hosted apache2 website visible on internet [closed]
If your external IP (relative to your local network) is a non-routable address (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), then you cannot do this. DDNS provides a name->IP mapping, but your bigger problem is that you are behind NAT. For inbound connections to work you need forwarding rules on the NAT gateway, and if your ISP doesn't give you a real IP, they aren't going to set those up for you. If your ISP won't give you a public address and you can't switch to a provider that does, you can still get around this issue. To do so you need a host on the internet you can establish a tunnel to, so you can route traffic into your LAN.
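The classic way to do that with nothing but OpenSSH is a reverse tunnel; a sketch, where vps.example.com stands for any internet-facing host you control:

    # On the NATed device: keep a reverse tunnel open to the public host
    ssh -N -R 2222:localhost:22 tunnel@vps.example.com

    # Then log in to the device through the public host
    # (run from the VPS itself; to connect from anywhere, the VPS's
    # sshd needs "GatewayPorts yes")
    ssh -p 2222 deviceuser@localhost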
My Internet provider uses a NAT network to connect users, so when I connect to the Internet, I get a 10.x.x.x IP address. Is there any way to access a Unix device in this type of network? DDNS needs an external IP to work, and I can't get one even if I want to. Any ideas?

EDIT: And of course, the best solution would be a constant connection to the device.
DDNS unix device in 10.x network
You need no-ip.org to support your “www.” subdomain. You'll need to get the enhanced feature from no-ip.org for it to ever work. Or alternatively (might be even cheaper), buy your own domain name and make the domain and all the subdomains you want point to your single no-ip.org address.
    <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        ServerName domain.no-ip.org
        ServerAlias www.domain.no-ip.org
        DocumentRoot /var/www/main
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
    </VirtualHost>

These are my settings in /etc/apache2/sites-enabled/000-default. I have a webserver at home connected via "no-ip" (dynamic DNS). When I try to go to "www.domain.no-ip.org" I get redirected to my ISP's site "http://navigationshilfe1.t-online.de/http://navigationshilfe1.t-online.de/dnserror?url=http://www.domain.no-ip.org/". The basename is dnserror. Nice little, very little, information. I don't know the mechanics of DNS. Can somebody tell me where the problem is?

Server version: Apache/2.2.22 (Debian)
Apache subdomain can not be resolved
Just store the address in a variable and then you can ping that:

    $ foo=unix.stackexchange.com
    $ ping "$foo"
    PING unix.stackexchange.com (104.18.43.226) 56(84) bytes of data.
    64 bytes from 104.18.43.226 (104.18.43.226): icmp_seq=1 ttl=58 time=5.12 ms
    64 bytes from 104.18.43.226 (104.18.43.226): icmp_seq=2 ttl=58 time=10.5 ms
    64 bytes from 104.18.43.226 (104.18.43.226): icmp_seq=3 ttl=58 time=8.05 ms
    ^C
    --- unix.stackexchange.com ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2004ms
    rtt min/avg/max/mdev = 5.120/7.901/10.532/2.212 ms

To make it permanent, edit your ~/.profile file (or ~/.bash_profile if that file exists and you are using bash) and add this line (of course, change foo to whatever you want your variable to be called, and change the URL to the name of your server):

    export foo=unix.stackexchange.com

Now, from the next time you log in, you will be able to run ping "$foo" to ping, or echo "$foo" to print it out, etc.
I have a DDNS service with noip.com, but the hostname I have is hard to remember. It works: I can resolve it using resolveip and I get the current IP of the router. I tried to use /etc/hosts, but that didn't work; it requires that I put in an IP. How can I give a short name to the DDNS hostname I have, so that for example ping name makes the system resolve name to the DDNS hostname? I wonder if NetworkManager or ip addr can help me. Thanks a lot.
short name a domain name
It could be done on a distributed system, but that's not likely the case, as it would make the service more expensive and complicated to run. Even with a distributed system, you could never have enough servers to be perfectly aware of specific network outages. Furthermore, your total load and bandwidth would always be a function of the number of servers, which is very inefficient. It's most likely an extremely simple script that tries to open the site you ask about from their server, and if it gets any error returned, it says the site is down. A simple version of such a script can be done in 15 minutes or less by an experienced programmer.
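A poor man's version of that check, as a sketch (the URL and timeout are arbitrary examples):

    #!/bin/sh
    # Report up/down depending on whether the URL answers without error.
    url="${1:-https://example.com}"
    if curl -fsS --max-time 10 -o /dev/null "$url"; then
        echo "$url looks up from here"
    else
        echo "$url looks down from here"
    fi

Running it from a second machine on a different ISP is what approximates the "is it just me?" part of the service.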
The Internet, or the Web as I understand it, runs on a variety of servers. The browser is the client: you give it a URI or URL and it tries, through various routes, to connect to the web page and render it. But sometimes there is a failure. Sometimes when it fails you get a 404 or some other status number, but sometimes you just get a connection-timed-out message. In either of these scenarios, people turn to the isup.me web service. If the site is up, then you try changing browsers, using Tor, or some other way to access that website. My question is two-fold:

a. Does anybody have any idea how the isup.me web service works? The only way I see it happening is if it's either at an Internet Exchange Point (IXP) or near the backbone.

b. While it may not be possible to emulate this service totally, is there some sort of poor man's method to run a similar kind of service for oneself? The only way I see is having multiple service providers (ISPs) and hoping that they don't all take the same path. Not a very effective methodology, or are there other ways?
can a service like isup.me be duplicated on the system for self? [closed]
I found an alternative solution: I connected my router to No-IP, and it works now.
I have problems with my webserver. Normally you start the No-IP DUC via sudo noip2. I tried to automate it with a cronjob. Using crontab -e, I created this file:

    # Edit this file to introduce tasks to be run by cron.
    #
    # Each task to run has to be defined through a single line
    # indicating with different fields when the task will be run
    # and what command to run for the task
    #
    # To define the time you can provide concrete values for
    # minute (m), hour (h), day of month (dom), month (mon),
    # and day of week (dow) or use '*' in these fields (for 'any').
    #
    # Notice that tasks will be started based on the cron's system
    # daemon's notion of time and timezones.
    #
    # Output of the crontab jobs (including errors) is sent through
    # email to the user the crontab file belongs to (unless redirected).
    #
    # For example, you can run a backup of all your user accounts
    # at 5 a.m every week with:
    # 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
    #
    # For more information see the manual pages of crontab(5) and cron(8)
    #
    # m h  dom mon dow   command

    @reboot cd /home/username/noip-2.1.9-1 && sudo noip2

It does not work and I do not know why. It would be nice if somebody could help me.
Crontab for starting no-IP does not work [closed]
I finally ended up using dnsspoof in conjunction with dnsmasq. Please tell me if you have an alternative to dnsspoof.
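For anyone following along: dnsspoof (from the dsniff package) answers DNS queries it sees on an interface according to a hosts-format file. A sketch, with placeholder interface and IP:

    # spoofhosts: hosts-file format, * matches any queried name
    #   192.168.12.1  *

    dnsspoof -i wlan0 -f spoofhosts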
I am sharing documents by running a hotspot in conjunction with dnsmasq, which redirects all name queries to an IP <IP> where the documents can be found:

    create_ap wlan0 wlan0 HereAreTheDocuments
    echo "address=/#/<IP>" >> /dev/dnsmasq.conf
    service dnsmasq start

I need to force users connected to my hotspot to set my IP as their DNS. How can I force connected users to use the local DNS instead of a remote one? For instance, lots of machines are using Google DNS at 8.8.8.8 and 8.8.4.4.
How to force machines connected to an AP to use the local AP DNS?
My eyes jump straight to the fact that your file name has a pipe | in it. According to your output the file system type is exfat. FAT and its derivatives do not support the inclusion of a pipe, along with a few other characters, in file names. If you were to rename the files to strip the problematic characters, I'd imagine you'd have more success. There are a number of ways to do this en masse (one is sketched below). That said, if the HTML files have links to each other this would break the links, so you would have to do further work to fix them. Another option would be to reformat the USB device with a more tolerant file system type, such as the ext family, but this might hamper your ability to use the USB stick on a non-Linux OS; I don't know if that's a consideration for you.
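One en-masse approach, as a sketch (run inside the material directory after backing it up; the character list covers the usual FAT-forbidden set):

    # Strip characters exFAT cannot store from every name in the
    # current directory. Name collisions are not handled - sketch only.
    for f in *; do
        clean=$(printf '%s' "$f" | tr -d '|?*<>:"')
        [ "$f" != "$clean" ] && mv -- "$f" "$clean"
    done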
The partition sdc1 was mounted on /media/debian/Ventoy:

    debian@debian:~$ sudo blkid | grep Ventoy
    /dev/sdc1: LABEL="Ventoy" UUID="F82D-76BE" BLOCK_SIZE="512" TYPE="exfat" PTTYPE="dos" PARTUUID="1af31d46-01"

    debian@debian:~$ df /media/debian/Ventoy
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sdc1       15324256 7971552   7352704  53% /media/debian/Ventoy

Show the content of the directory material:

    ls material
    'Best Semiconductor Stocks & ETFs in 2021 | The Motley Fool_files'
    'How To Use AppImage in Linux [Complete Guide] - It'\''s FOSS_files'
    'Best Semiconductor Stocks & ETFs in 2021 | The Motley Fool.html'
    'How To Use AppImage in Linux [Complete Guide] - It'\''s FOSS.html'

Copy it into /tmp:

    sudo cp -R material /tmp

It works fine. Then copy it onto sdc1:

    sudo cp -R material /media/debian/Ventoy
    cp: cannot create directory '/media/debian/Ventoy/material/Best Semiconductor Stocks & ETFs in 2021 | The Motley Fool_files': No such file or directory
    cp: cannot create regular file '/media/debian/Ventoy/material/Best Semiconductor Stocks & ETFs in 2021 | The Motley Fool.html': No such file or directory

Why can't I copy all the files in the directory to a USB storage device?
Why can't I copy all files in a directory to a USB storage device?
TL;DR: udev and fuse are not really compatible.

After noticing that this problem occurs not only with exfat but also with NTFS-formatted devices, I started looking specifically for problems with udev and fuse. Some comments about the combination I found:

"I think that the fuse process is being killed. You cannot start long-lived processes from a udev rule, this should be handled by systemd." (from Debian-devel)

"Warning: To mount removable drives, do not call mount from udev rules. In case of FUSE filesystems, you will get Transport endpoint not connected errors. Instead, you could use udisks that handles automount correctly or to make mount work inside udev rules, copy /usr/lib/systemd/system/systemd-udevd.service to /etc/systemd/system/systemd-udevd.service and replace MountFlags=slave to MountFlags=shared.[3] Keep in mind though that udev is not intended to invoke long-running processes." (from ArchWiki)

And there are more. I ended up using the scripts and configuration files from this answer. It works perfectly with all filesystem types. I wish I had found this earlier; it would have spared me a couple of days of debugging, trial and error.
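For completeness, the MountFlags workaround quoted from the ArchWiki translates to something like this sketch (with the same caveat that udev still isn't meant to host long-running mounts):

    # Override the udev service copy, then flip the mount propagation flag
    cp /usr/lib/systemd/system/systemd-udevd.service \
       /etc/systemd/system/systemd-udevd.service
    sed -i 's/^MountFlags=slave/MountFlags=shared/' \
       /etc/systemd/system/systemd-udevd.service
    systemctl daemon-reload
    systemctl restart systemd-udevd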
I'm trying to mount various SD cards automatically with udev rules. I started with these rules, solved a problem with the help of this question, and now I have the following situation: ext4- and vfat-formatted devices work perfectly, but when I plug in an exfat- or NTFS-formatted disk I get the following line in mount:

    /dev/sda1 on /media/GoPro type fuseblk (rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)

And the directory listing looks like this:

    $ ls -l /media/
    ls: cannot access '/media/GoPro': Transport endpoint is not connected
    total 0
    d????????? ? ? ? ? ? GoPro

I can't do anything under that mountpoint, not even as root:

    $ sudo ls -l /media/GoPro
    ls: cannot access '/media/GoPro': Transport endpoint is not connected

The only problems I can find from other people with the error message Transport endpoint is not connected seem to happen after a disk wasn't unmounted properly. But I have the problem while the disk is mounted. My current udev rules look like this:

    KERNEL!="sd[a-z][0-9]", GOTO="media_by_label_auto_mount_end"
    ACTION=="add", PROGRAM!="/sbin/blkid %N", GOTO="media_by_label_auto_mount_end"

    # Do not mount devices already mounted somewhere else to avoid entries for all your local partitions in /media
    ACTION=="add", PROGRAM=="/bin/grep -q ' /dev/%k ' /proc/self/mountinfo", GOTO="media_by_label_auto_mount_end"

    # Global mount options
    ACTION=="add", ENV{mount_options}="noatime"
    # Filesystem-specific mount options
    ACTION=="add", PROGRAM=="/sbin/blkid -o value -s TYPE %E{device}", RESULT=="vfat|ntfs", ENV{mount_options}="%E{mount_options},utf8,uid=1000,gid=100,umask=002"
    ACTION=="add", PROGRAM=="/sbin/blkid -o value -s TYPE %E{device}", RESULT=="exfat", ENV{mount_options}="%E{mount_options},utf8,allow_other,umask=002,uid=1000,gid=1000"

    # Get label if present, otherwise assign one
    ENV{ID_FS_LABEL}!="", ENV{dir_name}="%E{ID_FS_LABEL}"
    ENV{ID_FS_LABEL}=="", ENV{dir_name}="usbhd-%k"

    # Mount the device
    ACTION=="add", ENV{dir_name}!="", RUN+="/bin/mkdir -p '/media/%E{dir_name}'", RUN+="/bin/mount -o %E{mount_options} /dev/%k '/media/%E{dir_name}'"

    # Clean up after removal
    ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/umount -l '/media/%E{dir_name}'"
    ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/rmdir '/media/%E{dir_name}'"

    # Exit
    LABEL="media_by_label_auto_mount_end"

I tried using user_id and group_id instead of uid and gid, but to no avail. Mounting the device manually works fine:

    $ sudo mount -o noatime,utf8,allow_other,umask=002,uid=1000,gid=1000 /dev/sdb1 /media/GoPro/
    FUSE exfat 1.2.5
    $ ls -l /media/
    total 132
    drwxrwxr-x 1 pi pi 131072 Jan  1  1970 GoPro
Mounting exfat with udev rules automatically
Thanks to the other posters for replying/suggesting. Here is my full solution. df -P can be used to obtain the device from a path, and that can be fed to lsblk --fs to obtain the exact file system. So a one-liner is:
fs=$( lsblk --fs --noheadings $( df -P $path | awk 'END{print $1}' ) | awk 'END{print $2}' )
If all you need to know is that the file system is fuseblk, which covers both ntfs & exfat and turns out in the end to be sufficient for my purposes after all, this can be determined with the much simpler:
fs=$( stat -f -c '%T' $path )
User has an (incremental) backup script using rsync, to an external device. This was erroring on an SSD he had. Turns out his device was formatted exFAT. That means I need to detect this in the script, as I need to alter the options to rsync (e.g., exFAT cannot handle symbolic links, no owner/group permissions, etc.). User is running Linux Mint. I run Ubuntu. I can only assume/hope that a solution for my Ubuntu will work for his Mint. I have looked at: How do I know if a partition is ext2, ext3, or ext4? How to tell what type of filesystem you're on? https://www.tecmint.com/find-linux-filesystem-type/ There are a variety of good suggestions there, but I do not see one which meets my requirements, which are: Must report (parseable) ntfs/exfat explicitly, not just say fuseblk (which it will for both exfat & ntfs; I need to distinguish). Must not require sudo. Must be executable starting from a directory path on the file system (can assume it will be mounted), not just starting from a /dev/.... From the suggestions I have tried: fdisk -l, parted -l, file -sL: require sudo and/or a /dev/... block device. mount: requires /dev/..., only reports fuseblk. df -T, stat -f -c %T: accept a directory, but report only fuseblk. lsblk -f, blkid: require a /dev/... block device. Is there a single, simple command which meets all these criteria? Or, since lsblk/blkid seem to report exfat/ntfs correctly, if I need to pass them the /dev, how do I get that suitably from the directory path in a script?
How to detect NTFS/exFAT file system type from script
chmod and chown will not work for mounted fat32, exfat and ntfs-3g, period. What you're looking for is dmask=0002,fmask=0113.
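As a quick sanity check of those values (the effective mode is 0777 with the mask bits cleared):
dmask=0002   # directories: 0777 & ~0002 = 0775 (drwxrwxr-x)
fmask=0113   # files:       0777 & ~0113 = 0664 (-rw-rw-r--)
which is exactly the drwxrwxr-x the question below is after.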
I am trying to mount an exfat drive using fstab with read/write permission for both user and group. The line in /etc/fstab for this drive is:
UUID=5E98-37EA /home/ftagliacarne/data/media exfat defaults,rw,uid=1000,gid=1001,umask=002 0 1
Using these options the drive gets mounted at the correct location with the correct user and group; however, the group does not have read-write access, i.e. the permissions are set to:
drwxr-xr-x 7 ftagliacarne docker-media 262144 Sep 24 20:40 media
Is there any way of setting the group permission to also have read-write access? Desired outcome:
drwxrwxr-x 7 ftagliacarne docker-media 262144 Sep 24 20:40 media
Some of the things I tried: setting umask to 002; using chmod before/after mounting; using chmod recursively on the parent directory. Appreciate any help you can give me. Update 1: I also tried changing the fstab file to the following:
UUID=5E98-37EA /home/ftagliacarne/data/media exfat defaults,uid=1000,gid=1001,dmask=0002,fmask=0113 0 1
Alas, it still does not work. Update 2: After having issues at boot due to the configurations above, I changed the /etc/fstab entry to the following:
UUID=5E98-37EA /home/ftagliacarne/data/media exfat defaults,uid=1000,gid=1001,fmask=0113,dmask=0002,nofail 0 0
And now it works. I suspect the issue was with the pass option being 1, as changing that to 0 seems to have fixed it. Thank you to everyone who helped!
ExFat mount permission
Running testdisk on the ddrescue image, as per the instructions in this guide, I was able to recover all files. The initial quickscan did not detect anything useful, but after the quickscan a deepscan option is available. Deepscan detected three partition file systems: ext4, exFAT, exFAT. The ext4 partition was labeled Linux. I did not try to recover anything from that partition; this is the partition that was mountable previously. The first exFAT was unlabeled, and I was able to browse through it using the terminal commands provided by testdisk. Contained in this partition, which other programs such as gparted were unable to see, were all of the GoPro folders and files, in pristine order. Within the DCIM folder, I found all of my photos and videos with correct file names and time stamps, so recovery was not a matter of restoring corrupted files at all. The second exFAT looked to be the same as the first, but the files were unreadable.
I have a 128 GB Micro SD Card that I formatted as ext4 and used in a Chromebook for an Ubuntu Chroot Environment. I used it for quite some time that way. At some point, I either deleted everything off of it or formatted it using the Chromebook's simple formatting system. After this, I stuck it in a GoPro Hero Session, and found that the GoPro didn't care to format the disk and could immediately write pictures and videos. No problem. I went on a trip, took lots of photos and video, and then suddenly the GoPro was having trouble reading the disk. It was still able to record video and pictures (I assume) as I could turn on the recording mode and it didn't report any problems. From what I could tell, 128 GB is too much for this GoPro Session. When I plug this into a computer (Chromebook, Mac OSX, Ubuntu) I either get an error (Chromebook & OSX) or I have the disk mount, but no viewable file structure when I open it with a file explorer. Totally empty. If I right click, and click Properties (on Ubuntu), I get a report that the disk is formatted ext3/ext4, 128 GB and has 45.1 GB used, 71.9 GB free space. gparted is reporting the same thing. I was able to successfully recover all 6 GB of photos using photorec. I didn't recover any videos, though. I've used ddrescue to duplicate the disk to an image that I can work with. When I mount the image file, it behaves exactly the same way as the disk does (expected). ddrescue output:
rescued: 125829 MB, errsize: 0 B, current rate: 12648 kB/s
ipos: 125829 MB, errors: 0, average rate: 19079 kB/s
opos: 125829 MB, time since last successful read: 0 s
Finished
I ran a pass on the .IMG file with foremost -v -q -t mp4 -d but it finished with 0 files returned. At this point, it doesn't actually seem to me that there has been either data loss or corruption. I'm not sure what actually is going on, but suspect that something has gone awry with the file system - being ext3/ext4 in a GoPro rather than FAT32 or exFAT. EDIT: I just used Disk Usage Analyzer and found all of the largest files that photorec recovered. Among them are many large .bz2 files, with files in them with no extension that are timestamped for the time I would have recorded the footage. I can open them and view this information with an archive manager, but am unable to extract them. EDIT 2: I tried running fsck and checked in /lost+found. All of my Linux files were there, but no videos, and not even the pictures that I had previously recovered with photorec. I also tried to mount the image as exfat using sudo mount -o loop -t exfat SD_Card.img ~/mountpoint but it fails to mount:
FUSE exfat 1.2.8
ERROR: exFAT file system is not found.
SD Card Recovery without data loss or corruption
I just spent the better part of a day solving this problem. Apparently, Mac OS is quite picky about how the partition was created and with which flags. I was able to solve the problem by: 1. Converting the boot record to GPT using sudo gdisk /dev/sdX as suggested here. Just exit gdisk right away with w. It will warn about overwriting your drive. In my case answering with Y worked fine without losing data. Please make sure that you have backed up your data before doing this (no backup, no pity). 2. Setting the msftdata flag on the exFAT partition (in my case partition number 1): sudo parted /dev/sdX and then set 1 msftdata on. Afterwards my Mac opened the partition without complaints.
I formatted an external hard disk on my Ubuntu Linux system with exFAT. First I installed the exfat utilities: sudo apt-get install parted exfat-utils. Then I partitioned the disk with an MBR boot record and one primary partition using parted. Finally I formatted the partition with mkfs.exfat -n ShareDisk /dev/sdX1. Then I copied about 300 GB of data onto the disk. Everything worked fine on my Linux machine - so far so uneventful. However, when I plug the disk into my Mac, it says it cannot handle that file system and proposes to initialize or eject it. Now I explicitly chose exFAT so the disk would work with any operating system, and I have been successfully using exFAT-formatted disks on my Mac before.
Mac OS cannot mount exFAT disk created on (Ubuntu) linux
No. The same applies to NTFS and FAT32. Actually, AFAIK, of all the filesystems that Linux supports, only ext4 (individual files, one by one) and XFS (full defragmentation available) can be defragmented. As a last resort you could install a trial version of Windows 10 Enterprise and defragment from it. There is no built-in defrag tool for exFAT in Windows either, but there are some third-party tools, e.g. Defraggler, O&O Defrag and UltraDefrag. Defragmenting SSD/NVMe storage is generally not recommended (it causes needless wear of flash erase blocks). Some fragmentation issues are specific to rotating HDDs (seek time); some can also be experienced on an SSD when a filesystem susceptible to fragmentation is used. In a Linux-only environment, a cycle of backup, re-format and restore may be the only (or easiest) option.
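For the two Linux filesystems mentioned, the usual tools are (a sketch; the path and device names are placeholders):
sudo e4defrag -v /path/to/dir    # ext4, from e2fsprogs; works file by file
sudo xfs_fsr /dev/sdXn           # XFS online reorganizer, from xfsprogs
Neither of these helps with exFAT, unfortunately.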
I came across an SSD which has a very significant performance drop (about 20 times). As an exFAT filesystem is used on it, I suspect the drop might be due to fragmentation. Is there a tool available in the open source / free software world (= permissive or affordable license) to defragment the filesystem? Yes, I know about the good old way of backing up, reformatting and putting back. In this case it might be quite lengthy (some TBs of data in an embedded measurement system).
Is there a defragmentation utility for ExFAT available in GNU/Linux world?
The mount fails because shortname is not a supported option for exfat (that's a vfat option). Remove it from your fstab and you should be able to mount the device.
wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error.
In general, if you get this error (and the device you are trying is formatted to a supported filesystem), you should always check the kernel log for the "other error" part; in this case you should see something like:
kernel: exfat: Unknown parameter 'shortname'
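In other words, trimming the entry from the question below to supported options should be enough, e.g. (a sketch):
/dev/sde1 /lacie2 exfat user,noauto 0 0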
I am running Fedora 35 and am trying to mount an exFAT drive, specifically an SD memory card for my digital camera. The computer identifies the card reader as device /dev/sde1 and I am trying to use /lacie2 as the mount point. This works correctly: sudo mount /dev/sde1 /lacie2, and I am able to access the drive. Typing mount shows the drive as:
/dev/sde1 on /lacie2 type exfat (rw,relatime,fmask=0022,dmask=0022,iocharset=utf8,errors=remount-ro)
However, I tried writing an fstab entry to /etc/fstab as:
/dev/sde1 /lacie2 exfat user,noauto,shortname=lower 0 0
so I could mount the drive directly with sudo mount /lacie2. This doesn't work but gives the error: mount: /lacie2: wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error. What is the reason here? As Fedora obviously seems to be able to mount exFAT drives, why must I explicitly specify the device?
Mounting exFAT drive on Fedora 35 requires specifying device
Since Debian 11, exFAT is supported by the kernel. exfat-utils has been replaced by exfatprogs; you should install the latter instead. exfat-fuse is still available should you need it. To mount an exFAT file system with the kernel driver, use
mount -t exfat /path/to/device /path/to/mountpoint
as usual; to mount it using the FUSE driver, use
mount.exfat-fuse /path/to/device /path/to/mountpoint
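For completeness, exfatprogs ships the usual userspace companions to the kernel driver; a short sketch (the device name is a placeholder):
sudo apt install exfatprogs
sudo mkfs.exfat -L MyLabel /dev/sdX1   # create a filesystem
sudo fsck.exfat /dev/sdX1              # check/repair it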
$ apt-get install exfat-utils exfat-fuse
returns as output
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package exfat-utils is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source
E: Package 'exfat-utils' has no installation candidate
I tried to install this package but it seems to be missing. Has exFAT support already been made structural inside the kernel, so that it is no longer necessary to rely on other utilities to handle this filesystem? And is the command for mounting a drive still the same old mount -t exfat /dev/sda1 /mountpoint/? Thanks
Is exfat-utils missing in Debian 12?
exfat behaves just like vfat and since it has no concept of permissions, chown and chmod both won't work. You have to specify mount options such as uid, fmask and dmask, e.g. defaults,noatime,nofail,uid=1000,fmask=0133,dmask=0022 (run id to find out what your ID is).
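Putting that together with the first device from the question below (UUID taken from the question; substitute your own uid/gid), a sketch of a working entry:
UUID=0E7E-6579 /mnt/INT-1TB-4K exfat defaults,noatime,nofail,uid=1000,gid=1000,fmask=0133,dmask=0022 0 0
As an aside, the stray space in "defaults, permissions" also breaks the entry, since fstab fields are whitespace-separated and everything after the space is no longer parsed as a mount option.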
Fresh install of Ubuntu Server 20.04. cat /proc/filesystems shows exfat in the output. Not installed any other packages for exFAT as it should work from the kernel. Mounted 2 internal HDDs in fstab as below:
#INT-1TB-4K Internal HDD mount to /mnt/INT-1TB-4K
UUID=0E7E-6579 /mnt/INT-1TB-4K exfat defaults, permissions 0 0
#INT-1TB-BAK Internal HDD mount to /mnt/INT-1TB-BAK
UUID=3037-96B0 /mnt/INT-1TB-BAK exfat defaults, permissions 0 0
ls -all in /mnt gives
exharris@plexserv:/mnt$ ls -all
total 520
drwxr-xr-x 4 root root 4096 Jul 2 09:32 .
drwxr-xr-x 20 root root 4096 Jul 2 05:15 ..
drwxr-xr-x 9 root root 262144 Jul 3 03:49 INT-1TB-4K
drwxr-xr-x 7 root root 262144 Jul 3 03:49 INT-1TB-BAK
I get permission denied errors in the terminal when trying to create files in these folders (unless I use 'sudo', of course). This is because the 'others' write bit is set to -. When running sudo chmod -R 777 INT-1TB-4K from /mnt, I get no errors, but when doing ls -all again, nothing has changed. This is causing me problems also as I have set these up as Samba shares and also cannot write to them from other machines. I also tried sudo chmod -R o+w INT-1TB-4K - same thing happened. What is going on? I do not want to use exfat-utils and FUSE.
Native exFAT support in 5.4 kernel - issues?
/mnt/hdd is an ExFAT filesystem, which does not actually have a concept of Unix-style file ownerships or permissions, and so cannot store them. This is why your chown command is failing. The ownerships and permissions displayed by ls -l are actually created on the fly by the exfat-fuse driver according to the mount options. Since the default list of mount options includes allow_other, the driver is currently allowing full access to all the files and directories in this filesystem to any user on the system. You could use the id www-data command to display the user and group ID numbers of the www-data user. If www-data has a UID of 33 and a primary GID of also 33, you could change your /etc/fstab line to:
/dev/sda1 /mnt/hdd exfat-fuse default_permissions,allow_root,uid=33,gid=33,nosuid,nodev,relatime,blksize=4096 0 0
Then unmount & re-mount the filesystem:
umount /mnt/hdd
mount /mnt/hdd
Now all the file and directory ownerships and permissions in the /mnt/hdd filesystem should have changed. Note that this kind of Unix ownership and permission emulation for filesystems that don't have the capability to store Unix-style ownership/permission information is restricted to what you can specify with mount options: usually, it means that all the files and all the directories in that filesystem will have a single, fixed set of ownership/permission settings, and they cannot be changed with chown/chmod commands at all. If this is too inflexible for you, I'm afraid the only option would be to use another filesystem type. If this is a temporary setup, using an ExFAT filesystem to hold web server data (as indicated by the username www-data) might be fine. But if this is supposed to be a permanent setup, you should seriously consider reformatting /dev/sda1 to another filesystem type that allows native Unix-style file ownerships and permissions before starting to use it.
I don't have permission to chown the mounted directory /mnt/hdd. I am currently logged in as root. The ls -l output is:
rwxrwxrwx 1 root root 131072 Jan 1 1970 hdd
I am mounting it via this fstab config:
/dev/sda1 /mnt/hdd exfat-fuse defaults 0 0
I am trying to assign the owner of that drive to www-data via this command:
root@owncloud:/mnt# chown -R www-data:www-data hdd
and it says I don't have permission to do that. mount command output:
/dev/sda1 on /mnt/hdd type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
No permission to chown /mnt/hdd
The relationship is indeed the same as for other file system tools: exfat-utils provides tools to create, check (and repair), label, and dump ExFAT file systems. Like many other file system tools, they operate directly on the target devices, without using the kernel’s driver (if any); that’s one of the reasons why file systems need to be unmounted before they can be operated on by the utilities. The kernel driver allows the kernel to mount ExFAT file systems, making their content available to programs running on the system.
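Concretely, exfat-utils ships tools along these lines (names as packaged by Debian/Ubuntu; the device is a placeholder):
sudo mkfs.exfat -n LABEL /dev/sdX1   # a.k.a. mkexfatfs: create a filesystem
sudo fsck.exfat /dev/sdX1            # a.k.a. exfatfsck: check it
sudo exfatlabel /dev/sdX1 NEWLABEL   # read or change the volume label
sudo dumpexfat /dev/sdX1             # dump filesystem structures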
From a Phoronix article: Long story short, with Linux 5.7 is a much better Microsoft exFAT file-system implementation that is more reliable and with more functionality than the older driver while it will continue to receive improvements by Samsung and others. What is exfat-utils needed for if the Linux kernel itself supports exFAT? Is the relation the same with other filesystem-utils/filesystem-tools and their respective kernel drivers?
What is the relation between exfat-utils and the exFAT kernel driver?
ExFAT filesystems don't support Unix permissions. The Unix permissions are set at mount time. The ownership/permissions of the mountpoint (/mnt/USB) have nothing to do with whatever gets mounted over it. It's just a placeholder in the file tree. To fix it now, try:
sudo mount -o remount,umask=0,dmask=0,fmask=0,uid=$(id -u),gid=$(id -g) /dev/sdb2 /mnt/USB
Update your /etc/fstab entry to add the fmask=0 and uid= and gid= options. You'll have to hard-code your UID and GID, with the values from id -u;id -g.
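For example, with UID and GID both 1000 (substitute the values from id -u; id -g), the resulting fstab line would look roughly like:
/dev/sdb2 /mnt/USB exfat defaults,rw,uid=1000,gid=1000,umask=0,dmask=0,fmask=0 0 0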
I have a folder under /mnt/ with drwxrwxrwx permissions and under root:root. I then mount a USB drive (exFAT) to this folder and it becomes drwxr-xr-x. The issue is that now I cannot scp to that folder via WinSCP, since there is no permission for the group to write to the folder, and I am unable to scp as the root user. I am mounting the drive via fstab with the following:
/dev/sdb2 /mnt/USB exfat defaults,dmask=0000,umask=0000,rw 0 0
How do I either: 1) give the group write permission, or 2) mount it as a non-root user so that that user can write? I've attempted chown and chmod to no avail. chown, even when run as root, returns Operation not permitted. I am able to write to the mount as the root user when in SSH (such as mkdir), so the mount is writable, but only by root.
I have drwxrwxrwx permissions on a folder, but after mounting to it, it becomes drwxr-xr-x, which disallows members of the group from writing. How do I fix it?
There is a tool which some people have successfully used to convert Ext4 partitions to exFAT in place, fstransform. Note that the tool doesn’t officially support conversions to exFAT, and I haven’t tried it — but there are apparently reports of it working (with the --force-untested-file-systems flag). In any case you should have a backup of your file before attempting this, in which case you might as well reformat and restore your file from backup.
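Should you try it anyway, the invocation is roughly as follows (untested here, hence the hedging, and only after a verified backup):
sudo fstransform --force-untested-file-systems /dev/sdX1 exfat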
I have a 2 TB hard drive containing a GPT partition table and a single 2 TB partition with an ext4 file system. The partition has one 1.5 TB file inside it. I want to change the file system of this partition from ext4 to exFAT without deleting the 1.5 TB file. Can I do that without writing a custom program?
Change the file system of a partition without deleting its content
FUSE was added on 2005-09-09; that was around Linux 2.6.14, far earlier than Linux 5.4. Does incorporation of support for exfat filesystems mean that the exfat-fuse package is no longer required? Both can be used, but exfat-fuse has essentially been deprecated and superseded. There is no mention of a filesystem-specific manual for exfat, nor is there a "Mount options for exfat" sub-section. The man pages are not always kept in sync with what the kernel contains. There's a separate team maintaining them. Should users rely upon the "Mount options for fat" sub-section in man mount, or should they rely upon man mount.exfat-fuse, or on something else? Mount options for fuse-exfat and the kernel-native exfat driver are not related. They can be similar/the same but that's just happenstance. You think of these projects as similar/related while they are only similar in name and functionality. Code bases are different and written by different people.
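One practical consequence: you can tell which driver handled a given mount from the type column in the mount output, since the in-kernel driver reports exfat and the FUSE driver reports fuseblk:
mount | grep /mount/point   # 'type exfat' = kernel driver, 'type fuseblk' = FUSE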
I have read that support for the exfat filesystem has been incorporated in the Linux kernel since kernel ver 5.4 was released in late 2019 - early 2020. I'm confused about what this means wrt the exfat-fuse package. AFAIK, the exfat-fuse package existed prior to kernel ver 5.4, and was the ad-hoc method for mounting exfat partitions. Does incorporation of support for exfat filesystems mean that the exfat-fuse package is no longer required? Conversely, if exfat-fuse is still required, what was meant/accomplished by incorporating exfat support in the kernel? A related question is wrt the documentation for this - specifically man mount, and its FILESYSTEM-SPECIFIC MOUNT OPTIONS section. There is no mention of a filesystem-specific manual for exfat, nor is there a "Mount options for exfat" sub-section. Which leads me to ask, "Where are these mount options for exfat covered?" Should users rely upon the "Mount options for fat" sub-section in man mount, or should they rely upon man mount.exfat-fuse, or on something else?
Kernel-mounted vs FUSE-mounted exfat filesystem
Install or symlink it as /sbin/mount.exfat. (I checked strace -f mount -t nosuchfs nowhere nowhere. It tries /sbin/mount.nosuchfs, /sbin/fs.d/mount.nosuchfs, and /sbin/fs/mount.nosuchfs only). What's the worst that could happen :). If you forget and try to apt install exfat-fuse again, it's either going to give you a nice error message to remind you, or overwrite it.
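Assuming the self-compiled binaries landed in /usr/local/sbin as described in the question below, that is just:
sudo ln -s /usr/local/sbin/mount.exfat /sbin/mount.exfat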
After removing the default exfat-fuse package version 1.2.5 from my Debian Stretch system and replacing it with version 1.3.0, compiled from source, running mount with type exfat results in an unknown filesystem error. Checking /proc/filesystems reveals that exfat is not listed. Manually mounting exfat drives with mount.exfat works fine; the executables reside in /usr/local/sbin. How can I configure mount to use mount.exfat when appropriate?
Configure mount to recognize self compiled fuse exfat
Per @cat's comment, posting my comment as an answer - Have you considered making a sparse file the size of your old installation, formatting it as an ext4 file system, and mounting it on loopback, then copying to that? That would solve all the permission-loss etc. issues. exFAT's file size limit is 16 EiB, surely large enough. And per @cat's comment back to me, apparently a single file big enough won't be an issue ...
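A sketch of that idea (the size and paths are made up, and whether the file stays sparse depends on the exFAT driver):
truncate -s 200G /mnt/exfat/linux-backup.img   # container file on the exFAT drive
mkfs.ext4 -F /mnt/exfat/linux-backup.img       # -F because it's a file, not a block device
sudo mount -o loop /mnt/exfat/linux-backup.img /mnt/img
sudo rsync -aHAX --numeric-ids --exclude="/proc/*" --exclude="/sys/*" --exclude="/dev/*" --exclude="/run/*" --exclude="/mnt/*" / /mnt/img/
rsync's -H, -A and -X preserve hard links, ACLs and xattrs respectively, which is the whole point of the exercise.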
I need to back up / copy the files of my Linux installation to an external drive, so that I can restore them onto the new, larger disk. The destination disk for the restoration is twice as large, and will have larger partitions, ext4 and linux-swap. Imaging the entire disk or its first partition is not really a good option, because both require later re-partitioning I'd like to avoid. I am backing up to an exFAT-formatted drive; there are some issues with copying an ext4 Linux installation to exFAT though: it may destroy important hard links and fast* symbolic links from the ext4 file system (will break Linux); it won't preserve file ownership / permissions and setuid bits (will break Linux); it won't preserve capabilities (will break Linux); and it won't preserve files' extended attributes (xattrs) as well, as I believe many files have important information there (I don't care about Unix ACLs as I don't think I have any files using them). If I copied the files directly to NTFS, FAT32, exFAT, etc., much of this metadata would be destroyed. I don't care about compression since the original disk is smaller than my backup drive, but (GNU) tar seems to preserve only permissions/ownership (with -p and extract with --same-owner), links and xattrs, and file capability support is needed to back up modern Linux. It seems the other main options are a CloneZilla Live system, and cpio, which seems to create tar archives. So the main options are: CloneZilla, or just imaging the partition; tar itself, which may break things; or cpio, which may be limited by the tar archive format? *80,000 of the 83,000 symlinks are fast symlinks, and I'd like to preserve their fast-ness if possible
Backing up Linux to a Windows file system for later restoration
It turns out that allow_utime does work with the kernel exFAT driver, but not the old FUSE driver as far as I can tell. My real issue was that I was using FUSE to mount the filesystem. After uninstalling exfat-utils, the OS mounted the drive using the kernel driver instead of FUSE, and it was able to use allow_utime just like vfat.
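With the kernel driver, allow_utime takes an octal mask in the style of the umask options; a sketch for an SD-card setup like the one described below (the device name is assumed):
sudo mount -t exfat -o uid=1000,gid=1000,allow_utime=0022 /dev/mmcblk0p1 /mnt/sd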
Considering the fact that exfat does not store ownership information of files, is it possible to mount an exfat partition in Linux with an allow_utime option that is also available for vfat? If not, is there a way to allow any process to use utime on any file in the filesystem? I found an answer to this here, but this only applies to vfat. For the same reasons (no ownership information) it should theoretically also work with exfat, but I couldn't find any more information about it, and it didn't seem to work. For Context: I have an ARM based laptop (Pinebook Pro) that has 64GB of internal memory and expandable storage via SD card. I would like to use Dropbox with it, but because this is an ARM laptop, I have to use alternative clients like Maestral. The internal storage is too small, so I opted to have it work with my 128GB SD card. I would also like to make the SD card portable and work with >4GB files, so I formatted it with exfat which should now have first class support in the kernel and avoids permission issues. Maestral needs to be able to use the utime() command to modify the access times of each file as it syncs with Dropbox, but it throws errors when I do it in the exfat filesystem, because it does not have permission. So this question is really trying to find a solution to that.
Using allow_utime with exfat
You may want to try jhead instead, which does that out-of-the-box (with a, b... z suffixes allowing up to 27 files with the same date) and doesn't have the stability issue mentioned by @meuh:
find . -iname '*jpg' -exec jhead -n%Y_%m_%d__%H_%M_%S {} +
Or using exiftool (example in man page):
exiftool -ext jpg '-FileName<CreateDate' -d %Y_%m_%d__%H_%M_%S%%-c.%%e .
(here with %-c being a numerical suffix starting with -; added if the file already exists)
(Contributed to the answer and corrected from the comments): In case you want to preserve the original filename, simply add _%%f:
exiftool -ext jpg '-FileName<CreateDate' -d %Y_%m_%d__%H_%M_%S%%-c_%%f.%%e .
If I rename images via exiv2 to the exif date time, I do the following:
find . -iname \*jpg -exec exiv2 -v -t -r '%Y_%m_%d__%H_%M_%S' rename {} \;
Now it might happen that pictures have exactly the same timestamp (including seconds). How can I make the filename unique automatically? The command should be stable in the sense that if I execute it on the same directory structure again (perhaps after adding new pictures), the pictures already renamed shouldn't change, and if pictures with already existing filenames are added, the new filenames should be unique as well. My first attempt was just to leave the original basename in the resulting filename, but then the command wouldn't be stable in the sense above.
Rename images to exif time: Make unique filenames
The other ExifTool suggestions are great if you want to remove or change specific sections. But if you want to just remove all of the metadata completely, use this (from the man page):
exiftool -all= dst.jpg
    Delete all meta information from an image.
You could also use jhead, with the -de flag:
-de  Delete the Exif header entirely. Leaves other metadata sections intact.
Note that in both cases, EXIF is only one type of metadata. Other metadata sections may be present, and depending on what you want to do, both of these programs have different options for preserving some or removing it all. For example, jhead -purejpg strips all information not needed for rendering the image.
How can I recursively remove the EXIF info from several thousand JPG files?
Batch delete exif info
You can do it for all files using a for loop (in the shell/in a shell script):
for i in *.JPG; do
    j=`jhead "$i" | grep date | sed 's/^File date[^:]\+: \(.\+\)$/\1/'`.jpg
    echo mv -i "$i" "$j"
done
This is just a very basic outline. Delete echo when you have verified that everything works as expected.
Let's say I have a bunch of photos, all with correct EXIF information, and the photos are randomly named (because of a problem I had). I have a little program called jhead which gives me the below output:
$ jhead IMG_9563.JPG
File name : IMG_9563.JPG
File size : 638908 bytes
File date : 2011:02:03 20:25:09
Camera make : Canon
Camera model : Canon PowerShot SX210 IS
Date/Time : 2011:02:03 20:20:24
Resolution : 1500 x 2000
Flash used : Yes (manual)
Focal length : 5.0mm (35mm equivalent: 29mm)
CCD width : 6.17mm
Exposure time: 0.0080 s (1/125)
Aperture : f/3.1
Focus dist. : 0.29m
ISO equiv. : 125
Exposure bias: -1.67
Whitebalance : Manual
Light Source : Daylight
Metering Mode: pattern
Exposure Mode: Manual
Now I need to rename all the photos in the folder in the following format:
001.JPG
002.JPG
003.JPG
...
where the lowest number would be the oldest image, and the highest the newest. I'm not so good at scripting, so I'm asking for help. I think a bash script is enough, but if you feel more comfortable, you can write a python script. I thought of something like:
$ mv IMG_9563.JPG `jhead IMG_9563.JPG | grep date`
but I don't know how to do that for all the files at once.
How can I rename photos, given the EXIF data?
You can use the -g flag to output only the property you're interested in, and -Pv to print the value without any surrounding fluff. The result is easy to parse.
IFS=': '
set $(exiv2 -g Exif.Image.DateTime -Pv DSC_01234.NEF)
unset IFS
year=$1 month=$2 day=$3 hour=$4 minute=$5 second=$6
It may also be helpful to change the file date to match the image date: exiv2 -T DSC_01234.NEF.
How do I print the image Exif date with a tool like exiv2? My goal is to write the image year and month into separate variables. Do I really have to parse the output with a regex, or is there an alternative to something like this: exiv2 DSC_01234.NEF -ps | grep 'Image timestamp' | ...regex to parse the date
Print specific Exif image data values with exiv2
What you are getting there is not the time at which the photo was taken. It is the time at which the 123.jpg file was last modified (or created/uploaded onto the server). That information comes from the web server, which gets it from the file's timestamps. The photo could very well be 10 years older than what you get. Actually, it wouldn't be too difficult to make it look like it comes from the future! The information you're looking for is (optionally) stored in the image's metadata. From a webserver's perspective, that's part of the file's actual content, which means an HTTP HEAD request will not be enough. First, you need to download the file. Let's use that picture of a boat as an example. If you run your curl command, you'll see that the file was last modified on the 13th of October 2015. Once you've downloaded the file, this date will also appear in the file's timestamps (provided you preserve them across downloads, which I believe wget does). Now once you've got the file, all you need to do is access its metadata. On Linux, I'd say identify and exiftool are the most popular choices for that. In your case:
$ identify -format "%[EXIF:*GPS*]" image.jpg
$ exiftool -gpslatitude -gpslongitude image.jpg
Note that you don't really have to download the contents into an actual file in order to run these checks. You could easily use a pipe:
$ curl http://example.com/image.jpg | identify -format "%[EXIF:*GPS*]" -
$ curl http://example.com/image.jpg | exiftool -gpslatitude -gpslongitude -
An important note though: geotagging is not a systematic process. Not all images have GPS coordinates embedded in them. If the device used to take the picture is GPS-enabled and actually writes GPS metadata, then you're good (smartphones can do that...). Otherwise, these two commands will not return anything, in which case you'll have to be more creative to determine the location...
curl -s -v -X HEAD http://sitename.com/123.jpg 2>&1 | grep '^< Last-Modified:' gets me a date. Any way I can retrieve gps coordinates of an image? Any other metadata?
Extract latitude/longitude from an image using curl
Use exiftool instead:
exiftool -ext '' '-filename<%f_${ImageSize}.${FileType}' .
Would rename all the images in the current directory (.).
I can't make this work. I have a lot of images and I want to rename each file, appending its image size to the name, using exiv2. exiv2 pr * prints all info about a file:
# exiv2 pr 9b523e5a002268fe5067a928
File name : 9b523e5a002268fe5067a928
File size : 356433 Bytes
MIME type : image/jpeg
Image size : 1920 x 1200
Now I want to rename my file to look like 9b523e5a002268fe5067a928_1920x1200.jpeg
I already made something like this:
exiv2 pr * | grep "Image " | awk -F':' '{ print $2 }' | sed 's/ //g'
It gives me the image size, but how do I extend this to also get the image MIME type so that the .jpeg extension is correct?
How to rename all files and add image size to file name
With exiftool:
exiftool -r . > exif.txt
(remove the -r if you didn't intend to recurse into sub-directories). Note that GPS data usually is in EXIF tags.
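If you want the GPS fields in a spreadsheet-friendly form, exiftool can also emit CSV, e.g.:
exiftool -r -csv -gpslatitude -gpslongitude -createdate /path/to/dir > gps.csv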
Is there a way to extract EXIF information of all images within a directory (into an output file)? Preferably I also need GPS data but this is not essential. I only ask, as I have a number of directories with a large number of image files within, so automating the EXIF extraction would be useful.
Collect EXIF data of a directory
It is probably as you suspected, just a minor change to fix; try instead:
if [[ ! -z "$image" ]]; then
Explanation: Let's say when there is a match by exiftool and grep, then your $image variable contains this: abcabcabc. But when there is no output, $image contains: (nothing). In your test condition, you had:
if [[ ! -z "$image// }" ]]; then
So, in the first case bash sees this:
if [[ ! -z "abcabcabc// }" ]]; then
But in the second case, bash sees this:
if [[ ! -z "// }" ]]; then
The test is saying if "// }" is NOT zero-value, then... however "// }" is always going to be not zero-value; it is something, it is a string consisting of two slashes, a space and a curly brace. So since there is something there, something not zero-value, that is why the then part is triggered even when you have no matches in $image. So by removing this // } it should work.
I am having issues when trying to return the one image file that fits the parameters. $1 is the search parameter; in this instance it is "real", which is a tag on one of the images (not two) in the given folder. What happens when I call it as ./test.sh real is that it prints off both images, rather than just the one. I imagine it has to do with me setting up the function return as a variable and/or my condition statement, but I am not quite sure.
#!/bin/bash
for f in specim/*.jpg
do
    image=$(exiftool -EXIF:XPKeywords $f | grep "$1")
    if [[ ! -z "$image// }" ]]; then
        echo $f
    fi
done
what's returned:
../../test.sh real
specim/image2.jpg
specim/image.jpg
This bash script prints off what I want, only the one image rather than both (as well as the exiftool stuff which I don't want, but this was just a test):
#!/bin/bash
for f in *.jpg
do
    exiftool -EXIF:XPKeywords $f | grep $1
done
result:
./test2.sh real
XP Keywords : name;real
Any help would be appreciated and it's probably super simple... Thanks
Bash script condition always passing even when grep should return nothing
Use the ANSI C style escape sequence $'\n' to indicate a newline:
% echo "$datetime"
ExifMnoteCanon: Loading entry 0xcf27 ('(null)')...
ExifMnoteCanon: Loading entry 0x3ca8 ('(null)')...
ExifMnoteCanon: Loading entry 0xf88a ('(null)')...
2013:08:22 18:01:16
% echo "${datetime##*\n}"
ull)')...
2013:08:22 18:01:16
% echo "${datetime##*$'\n'}"
2013:08:22 18:01:16
As you can see, otherwise \n is being treated as a literal n.
The output from exif looks like this:
ExifMnoteCanon: Loading entry 0xcf27 ('(null)')...
ExifMnoteCanon: Loading entry 0x3ca8 ('(null)')...
ExifMnoteCanon: Loading entry 0xf88a ('(null)')...
2013:08:22 18:01:16
In my bash script, I store this in a variable:
datetime="$(exif --debug --machine-readable --tag=DateTimeOriginal "$file" 2>&1)"
I want to extract the last line of this using bash parameter substitution. I thought this would work:
datetime="${datetime##*\n}"
But the output is then:
ull)')...
2013:08:22 18:01:16
Why doesn't this work and how can I fix it?
Extract last line of multiline string
The tool you're looking for is called exiftool. You can use it to read & write exif meta data that's attached to a single image or a whole directory's worth of files using its recursive switch (-r). To change the camera model you can use the -model=".." switch.
Example
Here's an image before the change.
$ exiftool ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
ExifTool Version Number : 9.27
File Name : ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
Directory : .
File Size : 2.1 kB
File Modification Date/Time : 2013:12:31 14:18:44-05:00
File Access Date/Time : 2013:12:31 14:18:44-05:00
File Inode Change Date/Time : 2013:12:31 14:18:44-05:00
File Permissions : rw-------
File Type : JPEG
MIME Type : image/jpeg
JFIF Version : 1.01
Resolution Unit : None
X Resolution : 1
Y Resolution : 1
Comment : CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), quality = 95.
Image Width : 50
Image Height : 50
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)
Image Size : 50x50
To change the model of my camera.
$ exiftool -model="sam's camera" ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
Now when we recheck the tags.
$ exiftool ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
ExifTool Version Number : 9.27
File Name : ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
Directory : .
File Size : 2.3 kB
File Modification Date/Time : 2013:12:31 14:19:14-05:00
File Access Date/Time : 2013:12:31 14:19:14-05:00
File Inode Change Date/Time : 2013:12:31 14:19:14-05:00
File Permissions : rw-------
File Type : JPEG
MIME Type : image/jpeg
JFIF Version : 1.01
Exif Byte Order : Big-endian (Motorola, MM)
Camera Model Name : sam's camera
X Resolution : 1
Y Resolution : 1
Resolution Unit : None
Y Cb Cr Positioning : Centered
Comment : CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), quality = 95.
Image Width : 50
Image Height : 50
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)
Image Size : 50x50
There is another tool called exiv2 which does the same kinds of things as exiftool in case you're interested.
References
exiv2 website
ExifTool website
I have an analog camera and I give the film to a lab where they scan it. I wanted to upload the scans to flickr but want to change the info about the camera. Right now it's NORITSU KOKI QSS-32_33 and I want it to be pentax k1000 (I don't want to clear the exif data). How can I do this from the command line?
How to change camera info in Exif using command line
Here is a bash script that works. It is basically what you have with a few tweaks:
#!/bin/bash
set -o pipefail

find . -type f -name "*.mp3" -print0 | while IFS= read -r -d '' file; do
    BITRATE=$(exiftool -AudioBitrate "$file" | grep -Eo '[0-9]+ kbps' | sed 's/ kbps//')
    if [[ $? -eq 0 ]] && [[ $BITRATE -ge 320 ]]; then
        echo $BITRATE "$file"
    fi
done
In setting the $BITRATE variable I run exiftool through a pipe directly and use $(...) to capture the output. Then, in the conditional I check if the exiftool -> grep pipe was successful and the bitrate is sufficiently high using Bash's numeric comparison operators. I've checked that it handles some random .mp3 files I have lying around, including ones with spaces in the name.
I'd like to get a list of all mp3 files with a bitrate over 320 kbps. I'm not sure how to apply the regular expression to the output of the exiftool -AudioBitrate command.
find . -type f -name '*.mp3' -print0 | while IFS= read -r -d '' i; do
BITRATE=echo $(exiftool -AudioBitrate "$i")| grep -q '#([0-9]+) kbps#';
if $BITRATE > 320
then
echo $BITRATE "$i"
fi
done
List all mp3 files having over 320 kbps bitrate using bash
This uses ffmpeg (sudo apt install ffmpeg to install) and works on your exact file names. It replaces your old files with new ones with the metadata set. Maybe try WITHOUT the && mv "~$f" "$f" part first:
$ for f in *.mp4; do ffmpeg -i "$f" -metadata creation_time="${f:0:4}-${f:4:2}-${f:6:2} ${f:9:2}:${f:11:2}:${f:13:2}" -codec copy "~$f" && mv "~$f" "$f"; done
Check metadata with:
$ ffprobe -v quiet 20190228_155905.mp4 -print_format json -show_entries stream=index,codec_type:stream_tags=creation_time:format_tags=creation_time
On a Linux system, I have a bunch of MP4 files named like 20190228_155905.mp4 but with no metadata. I've previously had a similar problem with some jpg's which I solved manually with
exiv2 -M"set Exif.Photo.DateTimeOriginal 2018:09:18 20:11:04" 20180918_201104.jpg
but as far as I can see, the DateTimeOriginal is only for images, not videos. Videos that do have metadata have a Xmp.video.MediaCreateDate field that seems like what I want. I guess it contains a Unix timestamp, so I'd need a way to get the date from the filename, convert it to a Unix timestamp and set that value to Xmp.video.MediaCreateDate. Is that all correct? Or am I overcomplicating things? Edit: If I wasn't clear, I want to set creation date metadata on mp4 files using its filename that contains the date, so that programs can sort all my media files by their metadata
Batch set MP4 create date metadata from filename
If you have the same number of photos as there are lines in the CSV file, then you can use a simple for loop:
for photo in *.png; do
    IFS=" " read -r latitude longitude altitude time compHeading gimbHeading gimbPitch
    exiftool -GPSLongitude="$longitude" -GPSLatitude="$latitude" "$photo"
done < test2.csv
I have a drone that I used to make a flight movie, and I am going to use this footage to build a DEM (digital elevation model) of the topography I was filming. I can extract frames from the movie easily enough, but the method (ffmpeg) does not give these frames the lat-lon-elev-etc information necessary to reliably build the DEM. All this data is available in a .csv file stored in the drone flight control app, which I have downloaded. I want to extract from this .csv file all the columns of navigational data. I can do this using awk. Then I want to write a script that will attach the navigational data from a certain timestamp in the flightpath to a corresponding still frame extracted from the movie (at the same timestamp). I can use exiftool for attaching GPS data to an image, but being quite new to shell scripting I cannot get my current nested loop to work. Currently, my script writes all lines from the .csv file to every picture in the folder. Instead, I want to write line1 (lat-lon-elev-etc) to photo1, line2 to photo2, and so on. I feel I should be able to fix this, but can't crack it: any help very welcome!
# Using awk, extract the relevant columns from the flightpath dataset
awk -F, '{print $1,$2,$3,$7,$15,$22,$23 }' test.csv > test2.csv
# Read through .csv file line-by-line
# Make variables that can be commanded
while IFS=" " read -r latitude longitude altitude time compHeading gimbHeading gimbPitch
do
# exiftool can now command these variables
# write longitude and latitude to some photograph
for photo in *.png; do
    exiftool -GPSLongitude="$longitude" -GPSLatitude="$latitude" *.png
done
# Following line tells bash which textfile to draw data from
done < test2.csv
Shell script to add different GPS data to series of photos
TL;DR You cannot define your own ID3Tags, you must use the ones defined in the spec. Since a tag for Audio Bitrate is not defined, you're out of luck. That is not a problem with other audio containers (ones which use a different tag/comment system). Your major problem is that ID3 tags are a fixed specification. The best you can get is to write inside the UserDefinedText tag. Let's try this using ffmpeg; we'll use the anthem of Brazil, which I find quite amusing (and it is copyright free), as an example:
$ wget -O brazil.mp3 http://www.noiseaddicts.com/samples_1w72b820/4170.mp3
$ exiftool -s brazil.mp3
...
Emphasis : None
ID3Size : 4224
Title : 2rack28
Artist :
Album :
Year :
Comment :
Genre : Other
Duration : 0:01:10 (approx)
OK, we already have some tags in there. ffmpeg time:
$ ffmpeg -i brazil.mp3 -c:a copy -metadata Artist=Someone -metadata MyOwnTag=123 brazil-tags.mp3
$ exiftool -s brazil-tags.mp3
ExifToolVersion : 10.20
...
Emphasis : None
ID3Size : 235
Title : 2rack28
Artist : Someone
UserDefinedText : (MyOwnTag) 123
EncoderSettings : Lavf57.41.100
Album :
Year :
Comment :
Genre : Other
Duration : 0:01:11 (approx)
To make a comparison against a more flexible format (you should actually use some encoder parameters to get decent audio, but we are not interested in audio):
$ ffmpeg -i brazil.mp3 brazil.ogg
$ exiftool -s brazil.ogg
...
Vendor : Lavf57.41.100
Encoder : Lavc57.48.101 libvorbis
Title : 2rack28
Duration : 0:00:56 (approx)
And now tagging with ffmpeg:
$ ffmpeg -i brazil.ogg -c:a copy -metadata MyOwnTag=123 -metadata MyExtraThing=Yay brazil-tags.ogg
$ exiftool -s brazil-tags.ogg
...
Vendor : Lavf57.41.100
Encoder : Lavc57.48.101 libvorbis
Title : 2rack28
Myowntag : 123
Myextrathing : Yay
Duration : 0:00:56 (approx)
And we have the tags. This is because Vorbis Comments are allowed to be anything, contrary to ID3Tags which have only a number of allowed values (tag names). You do not need ffmpeg to use Vorbis Comments. vorbiscomment is much simpler to use, for example:
$ vorbiscomment -a -t EvenMoreStuff=Stuff brazil-tags.ogg
$ exiftool -s brazil-tags.ogg
...
Vendor : Lavf57.41.100
Encoder : Lavc57.48.101 libvorbis
Title : 2rack28
Myowntag : 123
Myextrathing : Yay
Evenmorestuff : Stuff
Duration : 0:00:56 (approx)
Extra note: FLAC uses Vorbis Comments as well. References: ID3v2 spec: List of possible ID3v2 tags
I know how to change a tag value, and how to extract tag values of a file from its metadata, and yes, we have great tools like id3tag, exiftool, ffmpeg, etc. But I need to add a completely new tag, not change an existing one. For example, consider a situation where we have a .mp3 file and it has 4 tags in its metadata: 1. Artist 2. Album 3. Genre 4. File Size. What I need is to add a new tag (a fifth tag) called Audio Bitrate. Is it possible? If yes, how should it be done? Thanks in advance
Add a new custom metadata tag
Exiftool has an -alldates parameter:
exiftool -alldates-=24 -filemodifydate-=24 -filecreatedate-=24 *.jpg
The above code works to subtract 24 hours, according to this forum comment (by Phil Harvey): https://exiftool.org/forum/index.php?topic=6330.msg31354#msg31354 You can combine the above code with an -out file specification, like -out ./newJPG.jpg or, for a new directory, -out ./newdir/newJPG.jpg. The -out specification gets inserted directly after the call to exiftool. You can also try adding (after making backups!) the option -overwrite_original OR -overwrite_original_in_place, inserted directly after the call to exiftool. See exiftool --help for details. Note, an earlier revision of this post suggested using the -globalTimeShift parameter, as in:
exiftool -globalTimeShift -24 -time:all *.jpg
However (according to Phil Harvey), "The -globalTimeShift option is needed only when you want to copy a shifted date/time value to another tag.", such as a -geo tag. See:
https://exiftool.org/forum/index.php?topic=9224.msg47655#msg47655
https://exiftool.org/forum/index.php?topic=6330.msg31354#msg31354
https://exiftool.org/exiftool_pod.html
https://exiftool.org/
I have taken 300 photos at an event. Afterwards I noticed that the date was set incorrectly in the camera - one day off. There are lots of EXIF data in the files, not just creation dates. How can I change only the dates contained within all relevant EXIF fields to correct the date (minus one day exactly)? No other data should be changed by this modification! Perhaps for each file I could dump the data (exiftool or exiv2?), then modify the dump (with awk?), then replace EXIF data from the modified dump? But how? EDIT: There is a lot of data per file:
# exiftool IMG_9040.JPG | wc
    289 2218 13996
Lots of it are dates:
# exiftool IMG_9040.JPG | grep 2021 | grep -v File
Modify Date : 2021:11:02 17:06:58
Date/Time Original : 2021:11:02 17:06:58
Create Date : 2021:11:02 17:06:58
Create Date : 2021:11:02 17:06:58.24+01:00
Date/Time Original : 2021:11:02 17:06:58.24+01:00
Modify Date : 2021:11:02 17:06:58.24+01:00
I wish to change all of these.
How to batch change exif data for JPEG photo files (wrong date set in camera)?
The -= operation is remove. To add a tag, just assign it:
exiftool -Exif:ImageDescription="foo" -Description="foo" "$pic"
exiftool "$pic" | grep "Image Description"
Remember to double-quote your variables ("$pic" in this example) to protect them from shell expansion and globbing. exiftool documentation is available as part of the tool itself, or online.
I need to add a tag named "Image Description" to a picture. However, nothing is changed. What am I missing?
cd /tmp/
wget https://i.imgur.com/jGwDTpL.jpg
pic=jGwDTpL.jpg
exiftool -Exif:ImageDescription-="foo" $pic
exiftool -Description-="foo" $pic
exiftool $pic | grep "Image Description"
Add an exif tag to a picture using exiftool
Well, let's say you are using exiftool and a command like
exiftool -sep $'\t' -T -filename -createdate dir
This prints one line per image in directory dir with the filename and its creation timestamp. I don't know if this is the timestamp you had in mind but you can always change that field. Pipe the output of that command to this awk command
awk 'BEGIN { OFS = "\t" }{ datetime = $2 " " $3 } { files[datetime] = files[datetime] " " $1 } END { for (time in files) print time ":" files[time] }'
...like so...
exiftool -sep $'\t' -T -filename -createdate dir | awk 'BEGIN { OFS = "\t" }{ datetime = $2 " " $3 } { files[datetime] = files[datetime] " " $1 } END { for (time in files) print time ":" files[time] }'
And you'll get output of the form
2016:05:05 00:52:03: IMG_0990.JPG IMG_0962.JPG
2016:05:05 00:51:23: IMG_0965.JPG
2016:05:05 00:48:36: IMG_0956.JPG IMG_0966.JPG IMG_0969.JPG
Note: For the sake of simplicity/sanity I am assuming that the image filenames don't have spaces in them or any other funkiness. Disclaimer: I'm not an awk expert. There may be more elegant ways to do the same thing.
How can I find all groups of images which have the same exif timestamp in a given directory from the command line in linux?
Find images with same exif timestamp
The way you're doing it runs crc32 on all files at the same time, producing one string with all the checksums and filenames. That's the string you see that mv complains about. So, run crc32 inside the loop. Assuming your files are in subdirectories of the current directory (so, ./dir0001/DCIM_0000.JPG, ./dir0002/DCIM_0123.JPG or something like that) and you want to put them in destdir/:
#!/bin/bash
shopt -s globstar
for file in **/*.JPG; do
    crc=$(crc32 "$file")
    date=$(exiftool -d "%Y%m%d-%H%M%S" -CreateDate "$file" | awk '{print $4}')
    basename=$(basename "$file" .JPG)            # remove directory and extension
    newname="destdir/$basename-$date-$crc.jpg"   # piece together a new name
    echo mv -nv "$file" "$newname"
done
The ** is a nonstandard extension that runs the glob for the whole directory tree. You could also use */*.JPG to just look for files in the immediate subdirectories. Similarly to $date, you could add another variable for the subsecond times and include that in how newname is formed:
subsec=$(exiftool -d "%Y%m%d-%H%M%S" -SubSecTimeDigitized "$file" | awk '{print $4}')
newname="destination/$basename-$date-$subsec-$crc.jpg"
Or something like that. Check the output for your files. Remove the echo from the mv once you're satisfied that the script works correctly.
I have a bunch of .JPG, .NEF and .MOV files made with my Nikon DSLR, named DSC_0001.JPG, DSC_0002.JPG and so on. After a certain number is reached, a new directory is made and the number count restarts, which leads to duplicates when I move them into one big directory. I figured out, using bash, how to name them after their date of creation using a for loop and exiftool:
for i in *.JPG
do
mv -nv "$i" "$(exiftool -d "%Y%m%d-%H%M%S" -CreateDate "$i" | awk '{print $4".jpg"}')"
done
an approach that creates another problem that I became aware of too late and lost about 100 files in the process - sometimes more than one picture is taken in one second and I can't find an easy way to attach a unit smaller than a second while keeping the filename readable. So I wanted to append the file's checksum, for example CRC32, something short and fairly random, and made this monstrosity:
sum="$(crc32 * | cut -d' ' -f1)"
for i in *.jpg
do
mv -nv "$i" "$(exiftool -d "%Y%m%d-%H%M%S" -CreateDate "$i" | awk '{print $4"-"}')$sum.jpg"
done
which worked with one file in the directory but produced these kinds of error messages when applied in the main directory of all the photos I have:
mv: failed to access '504a5b89'$'\t''DSC_0001.NEF'$'\n''629a031e'$'\t''DSC_0002.NEF'$'\n''1af2720c'$'\t''DSC_0003.NEF'$'\n''852f62de'$'\t''DSC_0004.NEF'$'\n''874bd1f0'$'\t''DSC_0005.NEF'$'\n''f3fceda8'$'\t''DSC_0006.NEF'$'\n''28207fa2'$'\t''DSC_0007.NEF'$'\n''046ca494'$'\t''DSC_0008.NEF'$'\n''abf11428'$'\t''DSC_0009.NEF'$'\n''479e728d'$'\t''DSC_0010.NEF'$'\n''8df21237'$'\t''DSC_0011.NEF'$'\n''77663953'$'\t''DSC_0012.NEF'$'\n''7d9871c7'$'\t''DSC_0013.NEF'...
_0106.NEF'$'\n''31f00e91'$'\t''DSC_0107.NEF'$'\n''355b0664'$'\t''DSC_0108.NEF'$'\n''201e3b02'$'\t''DSC_0109.NEF'$'\n''09456d16'$'\t''DSC_0110.NEF'$'\n''1bfa57da'$'\t''DSC_0111.NEF'$'\n''7171b5b8'$'\t''DSC_0112.NEF'$'\n''6c29ae1a'$'\t''DSC_0113.NEF'$'\n''92861cfd'$'\t''DSC_0114.NEF'$'\n''ed24a0a5'$'\t''DSC_0115.NEF'$'\n''2583d832'$'\t''DSC_0116.NEF'$'\n''6a45e5c5'$'\t''DSC_0117.NEF.mov': File name too long DSC_0051.NEF'$'\n''ac3eba1e'$'\t''DSC_0052.NEF'$'\n''c5052ad9'$'\t''DSC_0053.NEF'$'\n''80c3ee64'$'\t''DSC_0054.NEF'$'\n''d17f3177'$'\t''DSC_0055.NEF'$'\n''53f51ccf'$'\t''DSC_0056.NEF'$'\n''f91427af'$'\t''DSC_0057.NEF'$'\n''0d596f23'$'\t''DSC_0058.NEF'$'\n''fc378e62'$'\t''DSC_0059.NEF'$'\n''c72be5b3'$'\t''DSC_0060.NEF'$'\n''8bf29954'$'\t''DSC_0061.NEF'$'\n''f1193bbf'$'\t''DSC_0062.NEF'$'\n''d4460f24'$'\t''DSC_0063.NEF'$'\n''1e7b1c07'$'\t''DSC_0064.NEF'$'\n''3cc1cbd2'$'\t''DSC_0065.NEF'$'\n''ae935236'$'\t''DSC_0066.NEF'$'\n''f0ff02c1'$'\t''DSC_0067.NEF'$'\n''a16c6e58'$'\t''DSC_0068.NEF'$'\n''57ae8019'$'\t''DSC_0069.NEF'$'\n''82fc94df'$'\t''DSC_0070.NEF'$'\n''2ac41f26'$'\t''DSC_0071.NEF'$'\n''76b0493a'$'\t''DSC_0072.NEF'$'\n''9791ccf7'$'\t''DSC_0073.NEF'$'\n''eac3e7aa'$'\t''DSC_0074.NEF'$'\n''14f7c55c'$'\t''DSC_0075.NEF'$'\n''86df85b0'$'\t''DSC_0076.NEF'$'\n''d23ebeb8'$'\t''DSC_0077.NEF'$'\n''b1f51ea1'$'\t''DSC_0078.NEF'$'\n''1fb307bc'$'\t''DSC_0079.NEF'$'\n''91c17294'$'\t''DSC_0080.NEF'$'\n''c590cfb0'$'\t''DSC_0081.NEF'$'\n''9fc1eaad'$'\t''DSC_0082.NEF'$'\n''31de2e7c'$'\t''DSC_0083.NEF'$'\n''b4858068'$'\t''DSC_0084.NEF'$'\n''04371839'$'\t''DSC_0085.NEF'$'\n''fc440b4a'$'\t''DSC_0086.NEF'$'\n''9de00d44'$'\t''DSC_0087.NEF'$'\n''b9ab2214'$'\t''DSC_0088.NEF'$'\n''4c6f37c8'$'\t''DSC_0089.NEF'$'\n''14de5216'$'\t''DSC_0090.NEF'$'\n''8a565c42'$'\t''DSC_0091.NEF'$'\n''d05282d6'$'\t''DSC_0092.NEF'$'\n''fc032016'$'\t''DSC_0093.NEF'$'\n''ada77bc0'$'\t''DSC_0094.NEF'$'\n''3e6e288c'$'\t''DSC_0095.NEF'$'\n''6bfdb74a'$'\t''DSC_0096.NEF'$'\n''f2529938'$'\t''DSC_0097.NEF'$'\n''8193fcd9'$'\t''DSC_0098.NEF'$'\n''7786e3e1'$'\t''DSC_0099.NEF'$'\n''f2c36981'$'\t''DSC_0100.NEF'$'\n''b0e548e9'$'\t''DSC_0101.NEF'$'\n''b222e465'$'\t''DSC_0102.NEF'$'\n''b32683ac'$'\t''DSC_0103.NEF'$'\n''8511325d'$'\t''DSC_0104.NEF'$'\n''6ae62bf8'$'\t''DSC_0105.NEF'$'\n''bc15a457'$'\t''DSC_0106.NEF'$'\n''31f00e91'$'\t''DSC_0107.NEF'$'\n''355b0664'$'\t''DSC_0108.NEF'$'\n''201e3b02'$'\t''DSC_0109.NEF'$'\n''09456d16'$'\t''DSC_0110.NEF'$'\n''1bfa57da'$'\t''DSC_0111.NEF'$'\n''7171b5b8'$'\t''DSC_0112.NEF'$'\n''6c29ae1a'$'\t''DSC_0113.NEF'$'\n''92861cfd'$'\t''DSC_0114.NEF'$'\n''ed24a0a5'$'\t''DSC_0115.NEF'$'\n''2583d832'$'\t''DSC_0116.NEF'$'\n''6a45e5c5'$'\t''DSC_0117.NEF.mov': File name too long 
\n''d23ebeb8'$'\t''DSC_0077.NEF'$'\n''b1f51ea1'$'\t''DSC_0078.NEF'$'\n''1fb307bc'$'\t''DSC_0079.NEF'$'\n''91c17294'$'\t''DSC_0080.NEF'$'\n''c590cfb0'$'\t''DSC_0081.NEF'$'\n''9fc1eaad'$'\t''DSC_0082.NEF'$'\n''31de2e7c'$'\t''DSC_0083.NEF'$'\n''b4858068'$'\t''DSC_0084.NEF'$'\n''04371839'$'\t''DSC_0085.NEF'$'\n''fc440b4a'$'\t''DSC_0086.NEF'$'\n''9de00d44'$'\t''DSC_0087.NEF'$'\n''b9ab2214'$'\t''DSC_0088.NEF'$'\n''4c6f37c8'$'\t''DSC_0089.NEF'$'\n''14de5216'$'\t''DSC_0090.NEF'$'\n''8a565c42'$'\t''DSC_0091.NEF'$'\n''d05282d6'$'\t''DSC_0092.NEF'$'\n''fc032016'$'\t''DSC_0093.NEF'$'\n''ada77bc0'$'\t''DSC_0094.NEF'$'\n''3e6e288c'$'\t''DSC_0095.NEF'$'\n''6bfdb74a'$'\t''DSC_0096.NEF'$'\n''f2529938'$'\t''DSC_0097.NEF'$'\n''8193fcd9'$'\t''DSC_0098.NEF'$'\n''7786e3e1'$'\t''DSC_0099.NEF'$'\n''f2c36981'$'\t''DSC_0100.NEF'$'\n''b0e548e9'$'\t''DSC_0101.NEF'$'\n''b222e465'$'\t''DSC_0102.NEF'$'\n''b32683ac'$'\t''DSC_0103.NEF'$'\n''8511325d'$'\t''DSC_0104.NEF'$'\n''6ae62bf8'$'\t''DSC_0105.NEF'$'\n''bc15a457'$'\t''DSC_0106.NEF'$'\n''31f00e91'$'\t''DSC_0107.NEF'$'\n''355b0664'$'\t''DSC_0108.NEF'$'\n''201e3b02'$'\t''DSC_0109.NEF'$'\n''09456d16'$'\t''DSC_0110.NEF'$'\n''1bfa57da'$'\t''DSC_0111.NEF'$'\n''7171b5b8'$'\t''DSC_0112.NEF'$'\n''6c29ae1a'$'\t''DSC_0113.NEF'$'\n''92861cfd'$'\t''DSC_0114.NEF'$'\n''ed24a0a5'$'\t''DSC_0115.NEF'$'\n''2583d832'$'\t''DSC_0116.NEF'$'\n''6a45e5c5'$'\t''DSC_0117.NEF.mov': File name too longI want all files to be stored in one directory with a unique name. Any improvement, elegant solution or alternative is welcome!
How to rename a file to its creation date and hash value combined in order to achieve a unique filename?
Using ExifTool you could just run: exiftool -TagsFromFile file.jpg '-Keywords>Description' file.jpgYou can find more info in the manpage for exiftool.
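Since the collection reaches deep into subdirectories, note that exiftool can recurse and rewrite in place in a single invocation; a sketch (the path is a placeholder, and -overwrite_original suppresses the _original backup copies):

# Recurse into subdirectories, copying Keywords into Description in place
# /path/to/photos is a placeholder for your top-level photo directory
exiftool -r -overwrite_original -TagsFromFile @ '-Keywords>Description' /path/to/photos

Here -TagsFromFile @ tells exiftool to copy tags from the file currently being processed.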
I have over 80,000 photos that have been given proper EXIF keywords but Google Drive requires data in the Description for it to be searchable in their online app. I need to copy the contents of the Keywords entry to the description on this mass of photos that goes deep into sub directories.
Copy EXIF meta data from the Keywords to the Description on a huge amount of photos in sub folders
Try: exiftool '-CreateDate<${FileName;use Date::Manip; Date_Init("DateFormat=non-US"); /on (.*at.*?)(?: #\d+)?\.jpg$/;$_=$1; y/./:/;$_=UnixDate($_,"%Y-%m-%d %T") }' ./*on\ *at*.jpg(you may have to install the Date::Manip perl module). The -Tag<value sets the corresponding tag. The ${tag;perl-code} can be used to expand to the value of tag after it has been processed by the perl-code. Here, the plan is to use Date::Manip's UnixDate function to parse the date in the filename and convert it to a format acceptable for the CreateDate tag (2011-04-15 21:38:00). Date::Manip understands a lot of common date formats. For instance, it understands 3-09-12 at 9:24 PM (though you have to tell it whether it's the US or non-US convention where the day or month is first) and 2010-09-15 at 18.44 (note the : instead of .). So what we do is extract that part from the filename, convert the . to : and pass it to UnixDate.
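If it helps to see the date-parsing step in isolation, you can test it outside exiftool with a perl one-liner (assuming Date::Manip is installed; the sample string is taken from the question's filenames):

perl -MDate::Manip -e 'Date_Init("DateFormat=non-US"); $_ = "3-09-12 at 9.24 PM"; y/./:/; print UnixDate($_, "%Y-%m-%d %T"), "\n"'
# expected: 2012-09-03 21:24:00, if Date::Manip parses the phrase as described above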
There is a set of photos with timestamps in their filenames like these: Photo on 3-09-12 at 9.24 PM #2.jpgPhoto on 3-09-12 at 9.24 PM #1.jpgPhoto on 3-09-12 at 8.23 PM.jpgetc. ("3-09-12", means "3rd Sep 2012" or DD-MM-YY) But these photos have no EXIF data at all. Before you imported them to a larger collection, how would you pipe this information to exiftool and also tell it to add new timestamps as EXIF data, all from the photos' filenames? Update: (The now Pt. 1 of) my question about parsing filenames with DD-MM-YY and 12 hour time has been very kindly answered by @Stephane. But I discovered that the same batch of photos contains filenames with one variation I had missed. I hope it makes more sense to add a 'Pt. 2' instead of starting a whole new question. In short: How could I change @Stephane's brilliant answer -- exiftool '-CreateDate<${FileName;use Date::Manip; Date_Init("DateFormat=non-US"); /on (.*?at.*?[AP]M)/;$_=$1; y/./:/;$_=UnixDate($_,"%Y-%m-%d %T") }' ./*on\ *at*[PA]M*.jpg-- so that it might work with filename format below, which is slightly different from the first part of the question, in that it uses YYYY-MM-DD and 24-hour time: Photo on 2010-09-15 at 18.44 #4.jpgPhoto on 2010-09-15 at 18.44 #3.jpgPhoto on 2010-09-15 at 18.44.jpgMore explanation: Trying to edit the Date::Manip part of the Stephane's script seems to show up my ignorance of what's going on in the most important parts of it. I tried omitting the Date_Init line since we are back to an ISO-esque full-year date format and then having /on (.*?at.*?)/;$_=$1; y/./:/;$_=UnixDate($_,"%Y-%m-%d %T") ' ./*on\ *at*.jpgBut exiftool is giving me no writeable tags and FileName not defined. Instructions at http://search.cpan.org/~sbeck/Date-Manip-5.56/lib/Date/Manip.pod don't seem to be helpful (at least to me) in understanding what's going on with those periods, that 'y' at the start of the line, semicolon etc, and they're rather ungoogleable :S
Using EXIFTool to add EXIF data from filenames
You can either post-process the output with a tool like sed:

exiv2 -g Exif.Image.Artist -Pv ./*.webp | sed 's/.*\.webp[[:blank:]]*//'

or use a loop to pass a single file at a time:

for f in ./*.webp; do exiv2 -g Exif.Image.Artist -Pv "$f"; done

or use exiftool, e.g.:

exiftool -q -p '$Exif:Artist' ./*.webp
I'm using exiv2 0.27.2. I want to print the tag values of multiple webp files, but without the filename being printed. With the following command: exiv2 -g Exif.Image.Artist -Pv *.webpI get the following output: 3q2NIGNI_o.webp tomato 3qAwrJWu_o.webp orange 3qDZg9vz_o.webp cantelopeI just want the tag name output, without the filename, like so: tomato orange cantelope
Exiv2: How to print tag values without printing the corresponding filenames
What you're looking for is for mediainfo to support EXIF metadata (which is not one of the listed proposals on that page as far as I can tell). You should suggest it to them if you want the feature.
I have been scratching my head on this for quite some time now. Let's see what mediainfo says on an image - $ mediainfo ZQs3vcsHiGY.jpg General Complete name : ZQs3vcsHiGY.jpg Format : JPEG File size : 895 KiBImage Format : JPEG Width : 2 500 pixels Height : 1 576 pixels Color space : YUV Chroma subsampling : 4:2:0 Bit depth : 8 bits Compression mode : Lossy Stream size : 895 KiB (100%) ColorSpace_ICC : RGBNow let's see the same image in exiftool - $ exiftool ZQs3vcsHiGY.jpg ExifTool Version Number : 10.80 File Name : ZQs3vcsHiGY.jpg Directory : . File Size : 895 kB File Modification Date/Time : 2018:04:12 10:40:35+05:30 File Access Date/Time : 2018:04:15 12:43:28+05:30 File Inode Change Date/Time : 2018:04:12 10:40:35+05:30 File Permissions : rw-r--r-- File Type : JPEG File Type Extension : jpg MIME Type : image/jpeg JFIF Version : 1.01 Resolution Unit : inches X Resolution : 72 Y Resolution : 72 XMP Toolkit : XMP Core 4.4.0-Exiv2 Source URL : https://unsplash.com/photos/ZQs3vcsHiGY Source Type : unsplash Author : rawpixel.com Source Name : Unsplash.com Sfw Rating : 100 Image URL : https://unsplash.com/photos/ZQs3vcsHiGY/download Author URL : https://unsplash.com/@rawpixel Source Location : https://unsplash.com Creator : rawpixel.com Profile CMM Type : Linotronic Profile Version : 2.1.0 Profile Class : Display Device Profile Color Space Data : RGB Profile Connection Space : XYZ Profile Date Time : 1998:02:09 06:49:00 Profile File Signature : acsp Primary Platform : Microsoft Corporation CMM Flags : Not Embedded, Independent Device Manufacturer : Hewlett-Packard Device Model : sRGB Device Attributes : Reflective, Glossy, Positive, Color Rendering Intent : Perceptual Connection Space Illuminant : 0.9642 1 0.82491 Profile Creator : Hewlett-Packard Profile ID : 0 Profile Copyright : Copyright (c) 1998 Hewlett-Packard Company Profile Description : sRGB IEC61966-2.1 Media White Point : 0.95045 1 1.08905 Media Black Point : 0 0 0 Red Matrix Column : 0.43607 0.22249 0.01392 Green Matrix Column : 0.38515 0.71687 0.09708 Blue Matrix Column : 0.14307 0.06061 0.7141 Device Mfg Desc : IEC http://www.iec.ch Device Model Desc : IEC 61966-2.1 Default RGB colour space - sRGB Viewing Cond Desc : Reference Viewing Condition in IEC61966-2.1 Viewing Cond Illuminant : 19.6445 20.3718 16.8089 Viewing Cond Surround : 3.92889 4.07439 3.36179 Viewing Cond Illuminant Type : D50 Luminance : 76.03647 80 87.12462 Measurement Observer : CIE 1931 Measurement Backing : 0 0 0 Measurement Geometry : Unknown Measurement Flare : 0.999% Measurement Illuminant : D65 Technology : Cathode Ray Tube Display Red Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract) Green Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract) Blue Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract) Image Width : 2500 Image Height : 1576 Encoding Process : Baseline DCT, Huffman coding Bits Per Sample : 8 Color Components : 3 Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2) Image Size : 2500x1576 Megapixels : 3.9I went to mediainfo upstream and saw this page but was unable to comprehend which feature to vote for so it will give more metadata info. for images. I did also scour the local manpage as well as --help but was not able to get far. I do know that mediainfo is suited more for video and audio files rather than images but would be nice if one tool could do it all. I am running exiftool 10.80-1 and $ mediainfo --version MediaInfo Command line, MediaInfoLib - v18.03on Debian testing.
Why does mediainfo not give metadata of an image like exiftool does?
This will result in a shorter version:

exiftool -coordFormat '%.4f' '-filename<${gpslatitude;} ${gpslongitude} ${datetimeoriginal}_$filename' -d "%Y-%m-%d_%H.%M.%S%%-c.%%e" *.JPG

But it still adds the compass point N, E, S or W. If you want to add the city, this could be added with a loop, using the nominatim API:

#!/bin/bash
#exiftool '-filename<${datetimeoriginal}_$filename' -d "%Y-%m-%d_%H.%M.%S%%-c.%%e" *.JPG
for f in *.JPG; do
    echo "$f"
    LAT="$(exiftool -coordFormat '%.4f' "$f" | egrep 'Latitude\s+:' | cut -d' ' -f 23)"
    if [ "$LAT" == "" ]; then
        echo 'no geo coordinates'
    else
        LON="$(exiftool -coordFormat '%.4f' "$f" | egrep 'Longitude\s+:' | cut -d' ' -f 22)"
        URL='http://nominatim.openstreetmap.org/reverse?format=xml&lat='$LAT'&lon='$LON'&zoom=18&addressdetails=1'
        RES="$(curl -s "$URL" | egrep "<(city|village|town|ruins|state_district|country)")"
        LOC="$(echo "$RES" | grep '<city>' | sed 's/^.*<city>//g' | sed 's/<\/city>.*$//g')"
        if [ "$LOC" == "" ]; then
            LOC="$(echo "$RES" | grep '<city_district>' | sed 's/^.*<city_district>//g' | sed 's/<\/city_district>.*$//g')"
        fi
        if [ "$LOC" == "" ]; then
            LOC="$(echo "$RES" | grep '<village>' | sed 's/^.*<village>//g' | sed 's/<\/village>.*$//g')"
        fi
        if [ "$LOC" == "" ]; then
            LOC="$(echo "$RES" | grep '<town>' | sed 's/^.*<town>//g' | sed 's/<\/town>.*$//g')"
        fi
        if [ "$LOC" == "" ]; then
            LOC="$(echo "$RES" | grep '<ruins>' | sed 's/^.*<ruins>//g' | sed 's/<\/ruins>.*$//g')"
        fi
        if [ "$LOC" == "" ]; then
            LOC="$(echo "$RES" | grep '<state_district>' | sed 's/^.*<state_district>//g' | sed 's/<\/state_district>.*$//g')"
        fi
        if [ "$LOC" == "" ]; then
            LOC="$(echo "$RES" | grep '<country>' | sed 's/^.*<country>//g' | sed 's/<\/country>.*$//g')"
        fi
        if [ "$LOC" == "" ]; then
            echo "no city found at $URL"
        else
            BASE="${f%.*}"
            mv -v "$f" "$BASE-$LOC.JPG"
        fi
    fi
done

When you are done, you can count your images by location with

ls -1 | cut -d- -f 4 | sort | uniq -c | sort -n
This is how you can rename all Jpegs in a folder by geolocation and date: exiftool '-filename<${gpslatitude;} ${gpslongitude} ${datetimeoriginal}' -d "%Y-%m-%d %H.%M.%S%%-c.%%e" *.JPGthis results in very long filenames like 53 33 36.95000000 N 9 58 29.37000000 E 2015-11-04 19.22.49.JPGHow can I use the short locations instead? So it would result in 53.560308 9.975458 2015-11-04 19.22.49.JPGOr even better, Is it possible to get and add the City of the geolocation and add it to the name?
Use `exiftool` to rename photos by location
A lot will vary depending on the images you have and the meta-data they hold, but for example with ImageMagick you can imprint the EXIF date and time from myphoto.jpg with

magick myphoto.jpg -fill black -undercolor white -pointsize 96 -gravity southeast \
  -annotate 0 ' %[exif:datetime] ' output.jpg

If you don't have the magick command, convert will work just as well. If you want a non-ISO date format, you can extract the date first with identify and manipulate it with awk or other tools, e.g.

identify -format '%[exif:datetime]' myphoto.jpg | awk -F'[ :]' '{print $3":"$2":"$1 " " $4":"$5}'

would change 2023:10:04 09:29:24 to 04:10:2023 09:29. If your image does not have EXIF data, you might be able to find other time information, e.g. '%[date:create]'. Use identify -verbose on the file to list all the properties. To calculate a suitable pointsize for the annotation you can similarly use identify to find the width of the image:

identify -format '%w' myphoto.jpg

See imagemagick for many examples and alternatives. Here's a small shell script:

#!/bin/bash
let ps=$(identify -format '%w' input.jpg)/20
# 2023:10:04 09:29:24
datetime=$(identify -format %[exif:datetime] input.jpg)
if [ -z "$datetime" ]
then
    # date:create: 2023-11-11T14:40:21+00:00
    datetime=$(identify -format %[date:create] input.jpg)
fi
datetime=$(echo "$datetime" | gawk -F'[- :T+]' '{print $3":"$2":"$1 " " $4":"$5}' )
magick input.jpg -fill black -undercolor white -pointsize "$ps" -gravity southeast \
  -annotate 0 "$datetime" output.jpg
ImageMagick 6.9.11-60 on Debian. How do I print the date & time a photo was taken on the image itself (on existing images)? I have set this option in the camera settings, but it only applies to new photos. The date is in the image metadata. It should be the actual date the photo was taken, not the date it was saved on the PC hard drive.
ImageMagick, how to print the date the photo was taken on the image
First you need to define your XMP tag (a complete example here):

$ cat config.cfg
%Image::ExifTool::UserDefined = (
    'Image::ExifTool::XMP::pdfx' => {
        PdfSubTitle => {
            Writable => 'string',
        },
    },
);
1; # end

Then with the following command the tag and its value will be added:

exiftool -config config.cfg -PdfSubTitle="Sub Title" test.pdf

Confirm:

$ exiftool -PdfSubTitle test.pdf
Pdf Sub Title : Sub Title
$ exiftool test.pdf | grep 'Pdf Sub Title'
Pdf Sub Title : Sub Title
There is no subtitle meta tag for pdf in exiftool. Therefore I want to add a new exif tag for pdf files; its name must be PdfSubTitle. To do this, exiftool has a guide page, but I don't understand it because my knowledge of perl and exif is not enough. Also there is no example for pdf files in the guide. How can I do that? When everything goes right it should be like:

$ exiftool -config exif.config -PdfSubTitle="Sub Title" file.pdf
How do I create a new exiftool tag for pdf files using an exiftool config file on gnu/linux?
exiv2 rm. It is available on many platforms. Exiv2 is a C++ library and a command line utility to manage image metadata.
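Plugged into the find command from the question, FOO becomes exiv2 rm, i.e. a sketch along these lines:

# Strips metadata in place; test on copies first
find pictures -type f -iname "*.jpg" -exec exiv2 rm "{}" \;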
Possible Duplicate: Batch delete exif info How can I remove all tags from images under a directory (using Linux)? I can find all files with something like find pictures -type f -iname "*jpg" -exec FOO "{}" \; but what should FOO be?
Removing all tags from images [duplicate]
I did not manage to solve the problem with ridgy's answer. I managed to solve it in the end with LaTeX, in the thread How to rotate image 90 if height overful? The case where both picture dimensions are bigger than the page size is unsolved in that thread. exiftool is about one file at a time; to lay pictures out nicely relative to one another on the page, you need LaTeX. The tools discussed here are not sufficient, because the page orientation of all pictures has to be handled together. So the question is flawed in itself, I think, and cannot be handled by exif data alone.
I am trying to have some stability in image orientations, but they differ between the Debian image viewer/LaTeX and other image viewers. I run

exiftool -Orientation=1 -n *.jpg

but it does not have an effect on the orientation of wrongly positioned images; manually adjusting it with -Orientation=[1234] does not help either.

Fig. 1 Output where the same image is opened in an image viewer (Shotwell, ...) and Debian Space review (same output in LaTeX)

I thought first that the image orientation was the mistake, but it is not, because doing convert masi.jpg -rotate 90 masi-rotated.jpg keeps the relative difference the same.

Exif info

Wrongly positioned image, having 90 degrees or its multiples in orientation:

$ exiftool 28.jpg
ExifTool Version Number : 9.74
File Name : 28.jpg
Directory : .
File Size : 69 kB
File Modification Date/Time : 2016:11:29 11:59:08+02:00
File Access Date/Time : 2016:11:29 12:07:17+02:00
File Inode Change Date/Time : 2016:11:29 12:06:29+02:00
File Permissions : rw-r--r--
File Type : JPEG
MIME Type : image/jpeg
JFIF Version : 1.01
Resolution Unit : None
X Resolution : 1
Y Resolution : 1
Exif Byte Order : Little-endian (Intel, II)
Orientation : Rotate 270 CW
Software : Shotwell 0.20.1
Color Space : sRGB
Exif Image Width : 425
Exif Image Height : 707
XMP Toolkit : XMP Core 4.4.0-Exiv2
Image Width : 425
Image Height : 707
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)
Image Size : 425x707

Correctly (as expected) positioned image in both views:

$ exiftool 27.jpg
ExifTool Version Number : 9.74
File Name : 27.jpg
Directory : .
File Size : 66 kB
File Modification Date/Time : 2016:11:29 11:58:53+02:00
File Access Date/Time : 2016:11:29 12:13:36+02:00
File Inode Change Date/Time : 2016:11:29 12:07:46+02:00
File Permissions : rw-r--r--
File Type : JPEG
MIME Type : image/jpeg
JFIF Version : 1.01
Resolution Unit : None
X Resolution : 1
Y Resolution : 1
Exif Byte Order : Little-endian (Intel, II)
Orientation : Horizontal (normal)
Software : Shotwell 0.20.1
Color Space : sRGB
Exif Image Width : 842
Exif Image Height : 504
XMP Toolkit : XMP Core 4.4.0-Exiv2
Image Width : 842
Image Height : 504
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)
Image Size : 842x504

Debian: 8.5
Gnome: 3.14
Why does exif Orientation not force the image horizontal? [duplicate]
I've taken your requirement and written a fresh script. You may want to customise the set -- . so that the dot is replaced by the default path to the images. Or you can provide the images directory on the command line.

#!/usr/bin/env bash
# export PATH=/opt/bin:$PATH # Prefer commands from Entware

[[ $# -eq 0 ]] && set -- .    # Replace . with default path to source of images
unwanted='OLYMPUS DIGITAL CAMERA'    # phrase to match and remove

find "$@" -type f \( -iname '*.jpg' -o -iname '*.jpeg' \) -print |
    while IFS= read -r file
    do
        # Get list of fields containing the unwanted phrase
        fields=($(exiftool -s "$file" | awk -v x="$unwanted" '$0 ~ x {print $1}'))

        # Skip files with no issues
        [[ ${#fields[@]} -eq 0 ]] && continue

        # Convert fields to exiftool parameters
        exifargs=()
        for field in "${fields[@]}"
        do
            exifargs+=("-$field=")
        done

        # Apply the conversion
        echo "Removing ${fields[@]} from: $file"
        echo exiftool -overwrite_original "${exifargs[@]}" "$file"
    done

# Done
exit 0

Remove the echo from echo exiftool when you're happy it's going to do what you want. We're using bash arrays here. For example, fields is an array containing the list of EXIF key names that contain the text OLYMPUS DIGITAL CAMERA, and exifargs is the corresponding list of arguments to exiftool that remove the field values. We could have generated exifargs directly with awk but it seemed easier to show what was going on with the two steps.
I've been coming up with a script to remove any EXIF/IPTC/XML meta data from a JPEG file that equals 'OLYMPUS DIGITAL CAMERA'. For those who aren't aware, Olympus cameras set this attribute in all photos with no option to turn it off. Worse still, although I have set up a Lightroom preset to remove it on import, or while editing existing images, there seems to be a bug in recent LR releases that still embeds the attribute in the ImageDescription, Caption-Abstract and Description when exporting an image to JPEG. So like many Olympus users, I want it banished for good and I wrote a very simple bash script to do so. It's designed primarily to run on my QNAP NAS but could easily be modified to work in different environments. It searches for any instance of "OLYMPUS DIGITAL CAMERA" in the output of exiftool on a particular image and then deletes that attribute.

#!/usr/bin/env bash
IFS=$'\n';  ## Handle spaces in file paths/names
directory="/share/CE_CACHEDEV1_DATA/homes/admin/Images/Final Albums/"
exiftool="/share/CE_CACHEDEV1_DATA/.qpkg/Entware/bin/exiftool"
find="/opt/bin/find"

for f in $($find "$directory" -type f -iname '*.jp*g'); do
    #echo "$f"
    for field in $($exiftool -s "$f" | grep "OLYMPUS DIGITAL CAMERA" | awk -F: '{ print $1 }' | sed 's/ *$//g'); do
        echo "Removing $field on $f"
        $exiftool -overwrite_original -"$field"= "$f"
    done
done

The only problem with this is that it's quite slow. Any call to exiftool seems to take 0.5s and so I wanted to improve efficiency by removing all attributes in one go, rather than looping round each matching attribute and removing them one by one. So this is version 2 of the script.

#!/usr/bin/env bash
IFS=$'\n';  ## Handle spaces in file paths/names
directory="/share/CE_CACHEDEV1_DATA/homes/admin/Images/Final Albums/"
exiftool="/share/CE_CACHEDEV1_DATA/.qpkg/Entware/bin/exiftool"
find="/opt/bin/find"

for f in $($find "$directory" -type f -iname '*.jp*g'); do
    #echo "$f"
    fieldstring=''
    for field in $($exiftool -s "$f" | grep "OLYMPUS DIGITAL CAMERA" | awk -F: '{ print $1 }' | sed 's/ *$//g'); do
        fieldstring="${fieldstring}-$field= "
    done
    echo $fieldstring
    $exiftool -overwrite_original $fieldstring $f
done

The problem is that it only appears to remove one attribute at a time. The output of $fieldstring is:

-ImageDescription= -Caption-Abstract= -Description=

But I've also tried surrounding the tags to remove with single quotation marks and double, and neither helped. I thought perhaps that's a limitation of exiftool. But I wrote another script which simply wipes the 3 main attributes (ImageDescription, Caption-Abstract and Description) without any testing for what they contain and that works fine!

#!/usr/bin/env bash
IFS=$'\n';  ## Handle spaces in file paths/names
directory="/share/CE_CACHEDEV1_DATA/homes/admin/Images/Final Albums/"
exiftool="/share/CE_CACHEDEV1_DATA/.qpkg/Entware/bin/exiftool"
find="/opt/bin/find"

for f in $($find "$directory" -type f -iname '*.jp*g'); do
    echo "$f"
    $exiftool -overwrite_original -"Description"= -"Caption-Abstract"= -"ImageDescription"= "$f"
done

So I'm fairly sure this is one of those stupid, right-in-front-of-your-nose mistakes I've made but after 2 hours of trying to figure it out, I'm at a loss. Can anyone spot a stupid mistake? I've output $fieldstring and it looks OK to me so I think it's a bash script thing that I'm missing, hence posting here! Many thanks!
Removing multiple attributes simultaneously via exiftool
exiftool -DocumentID="uuid:$newID" example.pdf

See the examples in man exiftool.
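A minimal end-to-end sketch of the asked-for pdftool behaviour, assuming uuidgen is available to generate the new ID:

# Generate a fresh UUID and write it as the document ID
newID=$(uuidgen)
exiftool -DocumentID="uuid:$newID" example.pdf
exiftool example.pdf | grep 'Document ID'    # verify the change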
Hello, I am writing a bash script; this script must give an ID to a pdf file. How do I solve this? Is there a way, for example using the bash shell? The Document ID is:

$ exiftool example.pdf | grep 'Document ID'
Document ID : uuid:d037451d-240e-4d82-ba6d-92390b1d2962

For example:

$ pdftool --setDocID "newID" example.pdf
How do I set the document ID of a PDF file via the terminal?
"File types" on a Unix system are things like regular files, directories, named pipes, character special files, symbolic links etc. These are the type of files that find can filter on with its -type option. The find utility can not by itself distinguish between a "shell script", "JPEG image file" or any other type of regular file. These types of data may however be distinguished by the file utility, which looks at particular signatures within the files themselves to determine type of the file contents. A common way to label the different types of data files is by their MIME type, and file is able to determine the MIME type of a file.Using file with find to detect the MIME type of regular files, and use that to only find shell scripts: find . -type f -exec sh -c ' case $( file -bi "$1" ) in (*/x-shellscript*) exit 0; esac exit 1' sh {} \; -printor, using bash, find . -type f -exec bash -c ' [[ "$( file -bi "$1" )" == */x-shellscript* ]]' bash {} \; -printAdd -name sunrise before the -exec if you wish to only detect scripts with that name. The find command above will find all regular files in or below the current directory, and for each such file call a short in-line shell script. This script runs file -bi on the found file and exits with a zero exit status if the output of that command contains the string /x-shellscript. If the output does not contain that string, it exits with a non-zero exit status which causes find to continue immediately with the next file. If the file was found to be a shell script, the find command will proceed to output the file's pathname (the -print at the end, which could also be replaced by some other action). The file -bi command will output the MIME type of the file. For a shell script on Linux (and most other systems), this would be something like text/x-shellscript; charset=us-asciiwhile on systems with a slightly older variant of the file utility, it may be application/x-shellscriptThe common bit is the /x-shellscript substring. Note that on macOS, you would have to use file -bI instead of file -bi because of reasons (the -i option does something quite different). The output on macOS is otherwise similar to that of a Linux system.Would you want to perform some custom action on each found shell script, you could do that with another -exec in place of the -print in the find commands above, but it would also be possible to do find . -type f -exec sh -c ' for pathname do case $( file -bi "$pathname" ) in */x-shellscript*) ;; *) continue esac # some code here that acts on "$pathname" done' sh {} +or, with bash, find . -type f -exec bash -c ' for pathname do [[ "$( file -bi "$pathname" )" != */x-shellscript* ]] && continue # some code here that acts on "$pathname" done' bash {} +Related:Understanding the -exec option of `find`
I know I can find files using find: find . -type f -name 'sunrise'. Example result: ./sunrise ./events/sunrise ./astronomy/sunrise ./schedule/sunriseI also know that I can determine the file type of a file: file sunrise. Example result: sunrise: PEM RSA private keyBut how can I find files by file type? For example, my-find . -type f -name 'sunrise' -filetype=bash-script: ./astronomy/sunrise ./schedule/sunrise
How to find files by file type?
file uses several kinds of test:

1: If file does not exist, cannot be read, or its file status could not be determined, the output shall indicate that the file was processed, but that its type could not be determined.

This will be output like cannot open file: No such file or directory.

2: If the file is not a regular file, its file type shall be identified. The file types directory, FIFO, socket, block special, and character special shall be identified as such. Other implementation-defined file types may also be identified. If file is a symbolic link, by default the link shall be resolved and file shall test the type of file referenced by the symbolic link. (See the -h and -i options below.)

This will be output like .: directory and /dev/sda: block special. Much of the format for this and the previous point is partially defined by POSIX - you can rely on certain strings being in the output.

3: If the length of file is zero, it shall be identified as an empty file.

This is foo: empty.

4: The file utility shall examine an initial segment of file and shall make a guess at identifying its contents based on position-sensitive tests. (The answer is not guaranteed to be correct; see the -d, -M, and -m options below.)

5: The file utility shall examine file and make a guess at identifying its contents based on context-sensitive default system tests. (The answer is not guaranteed to be correct.)

These two use magic number identification and are the most interesting part of the command. A magic number is a special sequence of bytes that's in a known place in a file that identifies its type. Traditionally that place is the first two bytes, but the term has been extended further to include longer strings and other locations. See this other question for more detail about magic numbers in the file command.

The file command has a database of these numbers and what type they correspond to; that database is usually in /usr/share/mime/magic, and maps file contents to MIME types. The output there (often part of file -i if you don't get it by default) will be a defined media type or an extension. "Context-sensitive tests" use the same sort of approach, but are a bit fuzzier. None of these are guaranteed to be right, but they're intended to be good guesses.

file also has a database mapping those types to names, by which it will know that a file it has identified as application/pdf can be described as a PDF document. Those human-readable names may be localised to another language too. These will always be some high-level description of the file type in a way a person will understand, rather than a machine. The majority of different outputs you can get will come from these stages. You can look at the magic file for a list of supported types and how they're identified - my system knows 376 different types. The names given and the types supported are determined by your system packaging and configuration, and so your system may support more or fewer than mine, but there are generally a lot of them. libmagic also includes additional hard-coded tests in it.

6: The file shall be identified as a data file.

This is foo: data, when it failed to figure out anything at all about the file.

There are also other little tags that can appear. An executable (+x) file will include "executable" in the output, usually comma-separated. The file implementation may also know extra things about some file formats to be able to describe additional points about them, as in your "PDF document, version 1.4".
I need to recognize type of data contained in random files. I am new to Linux. I am planning to use the file command to understand what type of data a file has. I tried that command and got the output below. Someone suggested to me that the file command looks at the initial bytes of a file to determine data type. The file command doesn't look at a file extension at all. Is that correct? I looked at the man page but felt that it was too technical. I would appreciate if anyone can provide a link which has much simpler explanation regarding how the file command works. What are different possible answers that I could get after running the file command? For example, in the transcript below I get JPEG, ISO media, ASCII, etc: The screen output is as follows m7% file date-file.csv date-file.csv: ASCII text, with CRLF line terminators m7% file image-file.JPG image-file.JPG: JPEG image data, EXIF standard m7% file music-file.m4a music-file.m4a: ISO Media, MPEG v4 system, iTunes AAC-LC m7% file numbers-file.txt numbers-file.txt: ASCII text m7% file pdf-file.pdf pdf-file.pdf: PDF document, version 1.4 m7% file text-file.txt text-file.txt: ASCII text m7% file video-file.MOV video-file.MOV: dataUpdate 1 Thanks for answers and they clarified a couple of things for me. So if I understand correctly folder /usr/share/mime/magic has a database that will give me what are the current possible file formats (outputs that I can get when I type file command and follow it by a file). is that correct? Is it true that whenever 'File' command output contains the word "text" it refers to something that you can read with a text viewer, and anything without "text" is some kind of binary?
Linux file command classifying files
That refers to the "magic bytes" which many file formats have at the beginning of a file which show what kind of file this is. E.g. if a file starts with #! then it is considered a script.
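You can look at these bytes yourself; for example, on a typical x86-64 Linux system (assuming xxd is installed, and the exact hex dump layout may differ slightly):

$ xxd -l 8 /bin/ls
00000000: 7f45 4c46 0201 0100                      .ELF....

The 7f 45 4c 46 ("\x7fELF") signature at offset 0 is what marks the file as an ELF executable, just as a leading #! marks a script.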
I was reading about the file command and I came across something I don't quite understand:file is designed to determine the kind of file being queried.... file accomplishes this by performing three sets of tests on the file in question: filesystem tests, magic tests, language testsWhat are magic tests?
What does “magic tests” mean for the file command?
This behavior is documented on Linux, and required by the POSIX standard. From the file manual on an Ubuntu system:

EXIT STATUS
    file will exit with 0 if the operation was successful or >0 if an error was encountered. The following errors cause diagnostic messages, but don't affect the program exit code (as POSIX requires), unless -E is specified:
    • A file cannot be found
    • There is no permission to read a file
    • The file type cannot be determined

With -E (as noted above):

$ file -E saonteuh; echo $?
saonteuh: ERROR: cannot stat `saonteuh' (No such file or directory)
1

The non-standard -E option on Linux is documented as

    On filesystem errors (file not found etc), instead of handling the error as regular output as POSIX mandates and keep going, issue an error message and exit.

The POSIX specification for the file utility says (my emphasis):

    If the file named by the file operand does not exist, cannot be read, or the type of the file named by the file operand cannot be determined, this shall not be considered an error that affects the exit status.
Why does file xxx.src lead to cannot open `xxx.src' (No such file or directory) but has an exit status of 0 (success)? $ file xxx.src ; echo $? xxx.src: cannot open `xxx.src' (No such file or directory) 0Note: to compare with ls: $ ls xxx.src ; echo $? ls: cannot access 'xxx.src': No such file or directory 2
Why does "file xxx.src" lead to "cannot open `xxx.src' (No such file or directory)" but has an exit status of 0 (success)?
Yes, it links itself when it initialises. Technically the dynamic linker doesn’t need object resolution and relocation for itself, since it’s fully resolved as-is, but it does define symbols and it has to take care of those when resolving the binary it’s “interpreting”, and those symbols are updated to point to their implementations in the loaded libraries. In particular, this affects malloc — the linker has a minimal version built-in, with the corresponding symbol, but that’s replaced by the C library’s version once it’s loaded and relocated (or even by an interposed version if there is one), with some care taken to ensure this doesn’t happen at a point where it might break the linker. The gory details are in rtld.c, in the dl_main function. Note however that ld.so has no external dependencies. You can see the symbols involved with nm -D; none of them are undefined. The manpage only refers to entries directly under /lib, i.e. /lib/ld.so (the libc 5 dynamic linker, which supports a.out) and /lib*/ld-linux*.so* (the libc 6 dynamic linker, which supports ELF). The manpage is very specific, and ld.so is not ld-2.28.so. The dynamic linker found on the vast majority of current systems doesn’t include a.out support.file and ldd report different things for the dynamic linker because they have different definitions of what constitutes a statically-linked binary. For ldd, a binary is statically linked if it has no DT_NEEDED symbols, i.e. no undefined symbols. For file, an ELF binary is statically linked if it doesn’t have a PT_DYNAMIC section (this will change in the release of file following 5.37; it now uses the presence of a PT_INTERP section as the indicator of a dynamically-linked binary, which matches the comment in the code). The GNU C library dynamic linker doesn’t have any DT_NEEDED symbols, but it does have a PT_DYNAMIC section (since it is technically a shared library). As a result, ldd (which is the dynamic linker) indicates that it’s statically linked, but file indicates that it’s dynamically linked. It doesn’t have a PT_INTERP section, so the next release of file will also indicate that it’s statically linked. $ ldd /lib64/ld-linux-x86-64.so.2 statically linked$ file $(readlink /lib64/ld-linux-x86-64.so.2) /lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped(with file 5.35) $ file $(readlink /lib64/ld-linux-x86-64.so.2) /lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped(with the currently in-development version of file).
Consider the shared object dependencies of /bin/bash, which includes /lib64/ld-linux-x86-64.so.2 (dynamic linker/loader): ldd /bin/bash linux-vdso.so.1 (0x00007fffd0887000) libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f57a04e3000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f57a04de000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f57a031d000) /lib64/ld-linux-x86-64.so.2 (0x00007f57a0652000)Inspecting /lib64/ld-linux-x86-64.so.2 shows that it is a symlink to /lib/x86_64-linux-gnu/ld-2.28.so: ls -la /lib64/ld-linux-x86-64.so.2 lrwxrwxrwx 1 root root 32 May 1 19:24 /lib64/ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.28.soFurthermore, file reports /lib/x86_64-linux-gnu/ld-2.28.so to itself be dynamically linked: file -L /lib64/ld-linux-x86-64.so.2 /lib64/ld-linux-x86-64.so.2: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, strippedI'd like to know:How can the dynamically linker/loader (/lib64/ld-linux-x86-64.so.2) itself be dynamically linked? Does it link itself at runtime? /lib/x86_64-linux-gnu/ld-2.28.so is documented to handle a.out binaries (man ld.so), but /bin/bash is an ELF executable?The program ld.so handles a.out binaries, a format used long ago; ld-linux.so* (/lib/ld-linux.so.1 for libc5, /lib/ld-linux.so.2 for glibc2) han‐ dles ELF, which everybody has been using for years now.
How can the dynamic linker/loader itself be dynamically linked as reported by `file`?
The file type recognition is driven by so-called magic patterns. The magic file for analyzing TeX family source code contains a number of macro names that cause a file to be classified as LaTeX. Each match is assigned a strength, e. g. 15 in case of \begin and 18 for \chapter. This makes the heuristic more robust against false positives like misclassification of Plain TeX or ConTeXt documents that happen to define their own macros with those names.
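For a flavour of what such patterns look like: an entry in the magic source has the form offset, type, test, message, and its strength can be tuned with a !:strength directive (see magic(5)). An illustrative, not verbatim, entry for the \chapter case might read:

# Illustrative only; the real entries in file's Magdir/tex differ in detail
0	search/4096	\\chapter	LaTeX document text
!:strength + 18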
I have a number of files (Jupyter notebooks, .ipynb) which are text files. All of these contain some LaTeX markup. But when I run file, I get: $ file nb_* nb_1.ipynb: ASCII text nb_2.ipynb: ASCII text nb_3.ipynb: ASCII text, with very long lines nb_4.ipynb: LaTeX document, ASCII text, with very long lines nb_5.ipynb: text, with very long linesHow does file distinguish these? I would like all files to have the same type.(Why should the files have the same type? I am uploading them to an online system for sharing. The system classifies them somehow and treats them differently, with no possibility for me to change this. I suspect the platform uses file or maybe libmagic internally and would like to work around this.)
How does the file command distinguish text and LaTeX files?
You can use the file command:

$ file file.png
file.png: PNG image data, 734 x 73, 8-bit/color RGB, non-interlaced

$ mv file.png file.txt
$ file file.txt
file.txt: PNG image data, 734 x 73, 8-bit/color RGB, non-interlaced

The file command runs some tests on the file to determine its type. Probably the most important test is comparing a magic number (a string in the file header) against a pre-defined list.
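To fix the mismatched extensions automatically, as the question asks, a small loop over the detected MIME type could work; a rough sketch for the PNG-named-as-JPG case (test on copies first):

for f in *.jpg; do
    # file inspects the content, not the extension
    if [ "$(file -b --mime-type "$f")" = "image/png" ]; then
        mv -i -- "$f" "${f%.jpg}.png"
    fi
done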
I have an image archive I keep up. Sometimes, the sites I pull them from reformat the file while keeping the extension the same, most often making PNG images into JPG's that are still named ".png". Is there a way to discover when this has happened and fix it automatically? When on Windows, I used IrfanView for this, but that needs a Wine wrapper.
Finding a file type assuming wrong extension
2-3 files per second tested with file seems very slow to me. file actually performs a number of different tests to try and determine the file type. Since you are looking for one particular type of file (sqlite), and you don't care about identifying all the others, you can experiment on a known sqlite file to determine which test actually identifies it. You can then exclude the others using the -e flag, and run against your full file set. See the man page:

-e, --exclude testname
    Exclude the test named in testname from the list of tests made to determine the file type. Valid test names are:
    apptype   EMX application type (only on EMX).
    text      Various types of text files (this test will try to guess the text encoding, irrespective of the setting of the 'encoding' option).
    encoding  Different text encodings for soft magic tests.
    tokens    Looks for known tokens inside text files.
    cdf       Prints details of Compound Document Files.
    compress  Checks for, and looks inside, compressed files.
    elf       Prints ELF file details.
    soft      Consults magic files.
    tar       Examines tar files.

Edit: I tried some tests myself. Summary:

- Applying my advice with the right flags can speed up file by about 15%, for tests to determine sqlite. Which is something, but not the huge improvement I expected.
- Your file tests are really slow. I did 500 on a standard machine in the time you did 2-3. Are you on slow hardware, or checking enormous files, running an ancient version of file, or...?
- You must keep the 'soft' test to successfully identify a file as sqlite.

For a 16MB sqlite DB file, I did:

#!/bin/bash
for i in {1..1000}
do
    file sqllite_file.db | tail > out
done

Timing on the command line:

~/tmp$ time ./test_file_times.sh; cat out

real 0m2.424s
user 0m0.040s
sys 0m0.288s
sqllite_file.db: SQLite 3.x database

Trying the different test excludes, and assuming the determination is made based on a single test, it is the 'soft' (i.e. magic file lookup) test which identifies the file. Accordingly, I modified the file command to exclude all the other tests:

file -e apptype -e ascii -e encoding -e tokens -e cdf -e compress -e elf -e tar sqllite_file.db | tail > out

Running this 1000 times:

~/tmp$ time ./test_file_times.sh; cat out

real 0m2.119s
user 0m0.060s
sys 0m0.280s
sqllite_file.db: SQLite 3.x database
I am looking for a way to determine file types in a folder with thousands of files. File names do not reveal much and have no extension, but are different types. Specifically, I am trying to determine if a file is a sqlite database. When using the file command, it determines the type of 2-3 files per second. This seems like a good way to address the problem, except it is too slow. Then I tried opening each file with sqlite3 and checking to see if I get an error. That way, I can check 4-5 files per second. Much better, but I think that there might be a better way to do this.
Fast way to determine if a file is a SQLite database
Unfortunately, there is probably nothing you can do to make file produce the correct output. The file command tests the first few bytes of a file against a database of magic numbers. That is easy to check for in binary files (like images or executables) which have some specific identifiers at the beginning of the file. If the file is not a binary file, it will check the encoding as well as look for some specific words in the file to determine the type, but only for a limited number of file types (most of which are programming languages).
Why doesn't the following return text/csv? $ echo 'foo,bar\nbaz,quux' > temp.csv;file -b --mime temp.csv text/plain; charset=us-asciiI used this example for extra clarity but I'm also experiencing the problem with other CSV files. $ file -b --mime '/Users/jasonswett/projects/client_work/gd/spec/test_files/wtf.csv' text/plain; charset=us-asciiWhy doesn't it think the CSV is a CSV? Is there anything I can do to the CSV to make file return the "right" thing?
file command apparently returning wrong MIME type
Grab the source of the file command. Most if not all open source unices use this one. The file command comes with the magic database, named after the magic numbers that it describes. (This database is also installed on your live system, but in a compiled form.) Look for the file that contains the description text that you see:

grep 'Berkeley DB' magic/Magdir/*

The magic man page describes the format of the file. The trigger lines for "Berkeley DB" are

0	long	0x00061561	Berkeley DB
0	belong	0x00061561	Berkeley DB
12	long	0x00061561	Berkeley DB
12	belong	0x00061561	Berkeley DB
12	lelong	0x00061561	Berkeley DB
12	long	0x00053162	Berkeley DB
12	belong	0x00053162	Berkeley DB
12	lelong	0x00053162	Berkeley DB
12	long	0x00042253	Berkeley DB
12	belong	0x00042253	Berkeley DB
12	lelong	0x00042253	Berkeley DB
12	long	0x00040988	Berkeley DB
12	belong	0x00040988	Berkeley DB
12	lelong	0x00040988	Berkeley DB

The first column specifies the offset at which a certain byte sequence is to be found. The third column contains the byte sequence. The second column describes the type of byte sequence: long means 4 bytes in the platform's endianness; lelong and belong mean 4 bytes in little-endian and big-endian order respectively.

Rather than replicate the rules, you may want to call the file utility; it's specified by POSIX, but the formats that it recognizes and the descriptions that it outputs aren't. Alternatively, you can link to libmagic and call the magic_file or magic_buffer function.
I'm running file against a wallet.dat file (A file that Bitcoin keeps its private keys in) and even though there doesn't seem to be any identifiable header or string, file can still tell that it's a Berkley DB file, even if I cut it down to 16 bytes. I know that file was applying some sort of rule or searching for some sequence to identify it. I want to know what the rule it's applying here is, so that I can duplicate it in my own program.
How did file identify this particular file?
You can use the -m option to specify an alternate list of magic files, and if you include your own before the compiled magic file (/usr/share/file/magic.mgc on my system) in that list, those patterns will be tested before the "global" ones. You can create a function, or an alias, to always transparently use that option when just issuing the file command. The language used in the magic file is quite powerful, so there is seldom a need to revert to custom C coding. The only time I felt inclined to do so was in the 90's, when matching HTML and XML files was difficult because there was no way (at that time) to have the flexible casing and offset matching necessary to be able to parse <HTML and < Html and < html with one pattern. I implemented that in C as a modifier to the 'string' pattern, allowing the ignoring of case and compacting of (optional) blanks. These changes in C required adaptation of the magic files as well. And unless the file source code has significantly changed since then, you will always need to modify (or provide extra) rules in magic files that match those C code changes. So you might as well start out trying to do it with changes to the magic files only, and fall back to changing the C code if that really doesn't work out.
Can I use file and magic (http://linux.die.net/man/5/magic) to override the description of some other known formats? For example, I would like to describe the following formats:

BED: http://genome.ucsc.edu/FAQ/FAQformat.html#format1
Fasta: http://en.wikipedia.org/wiki/FASTA_format
...

which are 'just' text files. Or BAM (http://genome.ucsc.edu/FAQ/FAQformat.html#format5.1), which is 'just' a gzipped file starting with the magic number BAM\1? Do you know any example? Is it possible to provide custom C code to test the file instead of using the magic format?
file(1) and magic(5): describing other formats
The problem occurs in cut -d\ -f2. Change it to cut -d\  -f2, with two spaces after the backslash: the first (escaped) space is the delimiter, and the second separates it from -f2. To cut, the arguments look like this:

# bash:
args(){ for i; do printf '%q \\\n' "$i"; done; }
# args cut -d\ -f2
cut \
-d\ -f2 \

And here is the problem. \ escaped the space to a space literal instead of a delimiter between arguments in your shell, and you didn't add an extra space, so the whole -d\ -f2 part appears as one argument. You should add one extra space so -d\  and -f2 appear as two arguments. To avoid confusion, many people use quotes like -d' ' instead.

P.S.: Instead of using file and making everything ASCII, I'd rather use

if file "$attachment" | grep -q text$; then
    # is text
else
    # file doesn't think it's text
fi
I am writing a menu based bash script, and one of the menu options is to send an email with a text file attachment. I am having trouble with checking if my file is a text file. Here is what I have:

fileExists=10
until [ $fileExists -eq 9 ]
do
    echo "Please enter the name of the file you want to attach: "
    read attachment
    isFile=$(file $attachment | cut -d\ -f2)
    if [[ $isFile = "ASCII" ]]
    then
        fileExists=0
    else
        echo "$attachment is not a text file, please use a different file"
    fi
done

I keep getting the error cut: delimiter must be a single character.
Bash script: check if a file is a text file [closed]
The following command lists the lines in list_file that contain the name of an image file: <list_file xargs -d \\n file -i | sed -n 's!: *image/[^ :]*$!!p'file -i FOO looks at the first few bytes of FOO to determine its format and prints a line like FOO: image/jpeg (-i means to show a MIME type; it's specific to GNU file as found on Linux). xargs -d \\n reads a list of files (one per line) from standard input and applies the subsequent command to it. (This requires GNU xargs as found on Linux; on other systems, leave out -d \\n, but then the file list can't contain \'" or whitespace). The sed command filters out the : image/FOO suffix so as to just display the file names. It ignores lines that don't correspond to image files.
I have a list of files and I need to find all the image-files from that list. For example, if my list contained the following: pidgin.tar.gz photo01.jpg picture01 screenshot.gif invoice.pdfThen I would like only to select: photo01.jpg picture01 screenshot.gifNotes: Method must not be dependant on file extensions Obscure image formats for Photoshop and Gimp can be ignored. ( If feh can't show it, its not a image )
How to find image files by content
Using the case statement and command substitution:

for file in *; do
    case $(file --mime-type -b "$file") in
        image/*g)        ... ;;
        text/plain)      ... ;;
        application/xml) ... ;;
        application/zip) ... ;;
        *)               ... ;;
    esac
done

Check:

http://mywiki.wooledge.org/BashFAQ/002
http://mywiki.wooledge.org/CommandSubstitution
http://mywiki.wooledge.org/BashGuide/TestsAndConditionals#Choices
http://wiki.bash-hackers.org/syntax/ccmd/case

EDIT: if you insist on not using case, here is an if statement using bash:

if [[ $(file --mime-type -b "$file") == image/*g ]]; then
    ...
else
    ...
fi
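As one purely illustrative way to fill in those ... branches, here is a sketch that sorts regular files into directories named after their MIME type; the sorted/ layout is my assumption, not part of the answer above:

# Move each regular file into a directory named after its MIME type,
# e.g. sorted/image/png, sorted/text/plain.
for file in *; do
    [ -f "$file" ] || continue
    type=$(file --mime-type -b "$file")
    mkdir -p "sorted/$type"
    mv -- "$file" "sorted/$type/"
done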
I want to loop over all the images in a directory. The images don't have extensions, so I have to read the first bytes of each file to know its type. The loop should end up being something like:

for file in *
do
    if [ file --mime-type -b ]
    then
        ***
    fi
done
How to check the file type in a script
I can't think of an all-in-one tool, but there are programs that can cope with a large array of files of a given category. For example, p7zip recognizes a large number of archive formats, so if you suspect that a file is an archive, try running 7z l on it.

$ 7z l ta12b563enu.exe
…
Type = Cab
Method = MSZip
…

If you suspect that a file is an image, try ImageMagick.

$ identify keyboard.jpg.gz
keyboard.jpg.gz=>/tmp/magick-XXV8aR5R JPEG 639x426 639x426+0+0 8-bit DirectClass 37.5KB 0.000u 0:00.000

For audio or video files, try mplayer -identify -frames 0. If you find a file that file can't identify, you might make a feature request to the author of your magic library.
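A rough way to chain those probes into a single helper. This is entirely illustrative: it assumes 7z, identify and mplayer are installed, and simply surfaces whatever each tool reports about the file:

# Best-effort deep inspection: try several identifiers in turn.
inspect() {
    file "$1"
    7z l "$1" 2>/dev/null | grep -E '^(Type|Method) ='
    identify "$1" 2>/dev/null
    mplayer -identify -frames 0 "$1" 2>/dev/null | grep '^ID_'
}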
Sometimes it seems that the standard file command (5.04 on my Ubuntu system) is not sophisticated enough (or I am just using it wrong, which could well be). For example, when I run it on an .exe file that I am quite positive contains some archive, I would expect output like this:

$ improved-file foo.exe
foo.exe: PE32 executable for MS Windows (GUI) Intel 80386 32-bit
.zip archive included (just use unzip to extract)

Other issues:

It doesn't detect concatenations of different formats.
It doesn't detect common file formats, e.g. .epub, which is just a .zip container with some standardized .xml files etc. inside (file displays 'data').

As an example, with such an .exe file containing an archive, I guessed some archive formats and tried the corresponding unpack commands with a trial'n'error approach, which worked in the end, but I would prefer a more auto-inspection oriented workflow.
More sophisticated file command for deep inspection?
The type detection information isn't actually embedded in the file program; file just reads the magic file and then searches the signatures in that file to see what matches. The magic file exists both as a compiled version, magic.mgc, and as the original, human-readable source, just called magic. On my Fedora based systems these can be found at:

/usr/share/misc/magic
/usr/share/misc/magic.mgc

More information on the format of the file can be found in the magic(5) manual page.
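If you want to browse those signatures, or test a modified copy, the workflow looks roughly like this. The paths are assumptions (they vary by distribution, and some systems ship only the compiled .mgc), and mymagic is a hypothetical local copy of the source:

less /usr/share/misc/magic       # read the human-readable rules
file -C -m mymagic               # compile a local source into mymagic.mgc
file -m mymagic.mgc somefile     # test a file against the local rules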
Searching and googling, I could not find any information about the file types recognized by file. For example, an *.mp4 file is identified as "ISO Media" (while playing normally in VLC). This is not 100% clear; it leaves me wondering whether it's a correct detection or the file is being confused with an ISO image (either because, e.g., the sample is somehow corrupted, or just because the algorithm is not 100% accurate for all types). My problem is that I need to set up some rules for switching based on file type. I have created a sample file set, but I cannot collect enough samples of all the types which my code needs to recognize, and the real set will probably be really huge. It would be enough for me if I could read some comments to use as a reference for those types which are not so obvious. But to my surprise, I could not find any useful information. Most of my searches ended on the magic file format specification, which is not really helpful to me. I'm interested in the magic file which is distributed with, say, Debian.
How to find human-readable information about file types recognized by `file`?
Regexes in man magic are not extensively detailed, but have a look at this brilliant answer by JigglyNaga. For a start, you need to escape several characters in regexes in magic files: ^, + and spaces are examples. Here are two ways of making your magic file work for the files you describe:

0       string  CAD\n           CAD-Drawing
>&0     regex   \^A[0-9]\+      Format=[%s]
>>&0    search  \n
>>>&0   regex   \^[a-z]\+       Units=[%s]

This ignores the spaces, and will therefore print the following:

$ file -m mmm file[12].cad
file1.cad: CAD-Drawing Format=[A1] Units=[mm]
file2.cad: CAD-Drawing Format=[A00] Units=[m]

A better way (in my humble opinion) is to keep the spaces in the Format and Units strings:

0       string  CAD\n           CAD-Drawing
>&0     regex   \^A[0-9\ ]\+    Format=[%s]
>>&0    search  \n
>>>&0   regex   \^[a-z\ ]\+     Units=[%s]

(Note that I needed to escape the spaces even though they're inside character groups.) This prints the following:

$ file -m mmm file[12].cad
file1.cad: CAD-Drawing Format=[A1 ] Units=[mm]
file2.cad: CAD-Drawing Format=[A00] Units=[m ]

References:
magic example using search and/or regex
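To reproduce this locally, a quick way to create matching test files (the names and line contents, including the trailing blanks, are inferred from the question below):

printf 'CAD\nA1 \nmm\n' > file1.cad
printf 'CAD\nA00\nm \n' > file2.cad
file -m mmm file[12].cad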
I would like file drawing to print the format of a drawing file. The files start like:

CAD
A1 
mm

(blank after A1) or:

CAD
A00
m 

(blank after "m"). I tried something like this in the magic file:

0       string  CAD\n           CAD-Drawing
>&0     regex   ^A[0-9]+        Format=[%s]
>>&0    search  \n
>>>&0   regex   ^[a-z]+         Units=[%s]

but with no luck! Is there any way to solve this problem? I would prefer to get no blanks. That means not just:

0       string  CAD\n   CAD-Drawing
>&0     string  x       Format=[%s]
>>&0    string  x       Units=[%s]

which results in ... Format=[A1 ] ... or ... Units=[m ]
how can I read a number with file (magic)?