social.coop is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Fediverse instance for people interested in cooperative and collective projects. If you are interested in joining our community, please apply at https://join.social.coop/registration-form.html.


#cron


Hey there, fediverse. Do we have any #bash #cron experts who can help me figure out how to fix things?

TL;DR - I need to reference the `$USER` variable in a shell script, where it forms part of a dynamic path in a backup routine (the location varies by host, so I'm trying to automate as much as possible). Running the script as a logged-in user over SSH works perfectly, but when I add a crontab entry under the same user to run the same script, it fails to expand the `$USER` part, so the destination is incorrect and it throws an error.

I should also note that if I specify the path manually in the script it also works, but then it's a pain to automate across different hosts, so I'm trying to avoid that if possible.

I've now been going round in circles trying to figure this out for a couple of days, so it's time to ask for help.
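For what it's worth, cron typically starts jobs with a near-empty environment: `LOGNAME` is usually set, but `USER` often isn't, so a path built from `$USER` expands to nothing. A minimal defensive sketch, falling back to `id -un` (the destination layout below is a made-up example, not the poster's actual path):

```shell
#!/bin/sh
# Under cron, $USER is often unset; derive the name from the
# process owner instead, which works both over SSH and under cron.
BACKUP_USER="${USER:-$(id -un)}"

# Hypothetical per-host destination built from user name + hostname.
DEST="/backups/${BACKUP_USER}/$(uname -n)"
echo "rsync destination: ${DEST}"
```

The same trick works for any other login-shell variable a cron job silently misses.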

I forgot you can't just run a shell script with rsync calls via cron on macOS because of... security. I think my workaround will do though.

I write a shell script and then create an Automator application that calls the shell script. I then add a cron job to open the application.

I need to check if it runs when the screen is locked. I'm pretty sure it does but I will test again.
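A sketch of that crontab entry, assuming the Automator wrapper was saved as `/Applications/RunBackup.app` (a hypothetical name):

```shell
# crontab -e: launch the Automator wrapper nightly at 02:00.
# `open -a` starts the app in the user's GUI session, which is what
# lets it get past the macOS privacy (TCC) prompts that block a
# bare cron + rsync job from reading protected folders.
0 2 * * * /usr/bin/open -a /Applications/RunBackup.app
```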

#rsync #cron #macOS

So I needed to run some periodic backup jobs, both for personal and professional needs. If you were ever tasked with such a request, you probably looked at cron. But cron has shortcomings: it does not survive power-off events, it does not keep any logs, and you can't easily tell when, or if, a job ran.

Meet systemd timers: a modern approach to cron-like job scheduling.

yieldcode.blog/post/working-wi

yield code(); · Working with systemd timers - Dmitry Kudryavtsev: "The other day I thought to myself that it would be a good idea to have some backups of my data. So I was wondering, how would I execute a periodic backup task?"
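As a sketch, a user-level timer pair for such a backup might look like this (unit names and the script path are hypothetical):

```ini
# ~/.config/systemd/user/backup.service
[Unit]
Description=Periodic backup

[Service]
Type=oneshot
ExecStart=%h/bin/backup.sh

# ~/.config/systemd/user/backup.timer
[Unit]
Description=Run backup daily

[Timer]
OnCalendar=daily
# Run a missed job at next boot -- this is what fixes the power-off problem
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user enable --now backup.timer`; `journalctl --user -u backup.service` then gives you the logs cron lacks, and `systemctl --user list-timers` shows the last and next run.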

#question #bash #crontab #mysql :

I have a script that does

for db in database1 database2 database3; do
    echo "$db"
    if [[ "$db" != "database2" ]]; then
        echo "dumping database $db"
        mysqldump -u "$USER" -p"$PASSWORD" --databases "$db" > "$db.sql"
    fi
done

when I run it directly, no problem, it works. But as a #cron job: nothing, zip, zilch, not a thing.
And I can't see why.

Any ideas?
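One common culprit with scripts like this: cron does not run your login shell, so variables like `$USER` and `$PASSWORD` that exist in an interactive session are usually empty under cron, and `mysqldump` then fails (often silently, since the output goes to mail). A sketch of one workaround, defining them in the crontab itself (all values below are hypothetical placeholders):

```shell
# crontab -e
# cron supports NAME=value lines; nothing from your .bashrc is inherited.
USER=backup_user
PASSWORD=changeme
PATH=/usr/local/bin:/usr/bin:/bin

0 3 * * * /home/backup_user/dump-databases.sh >> /tmp/dump.log 2>&1
```

Redirecting to a log file also makes the "nothing happens" case visible.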

Replied in thread

@gerowen good story - i have to pull the trigger on a vps and implement a martial-law study regime, even on wkds. the server will be a needed mini timesink. it may be a case of the new normal - study for certs until I get a few, then keep studying and test-taking indefinitely until the right money and right situation magically show up #magicalthinking #routine #cron

A couple months back, I bought an external (Samsung) USB SSD. It's a bit aggressive about spinning down for power-saving. As a result, if I want Backblaze to be able to read it as part of its regular activities, I have to do (in #WSL2):

while true
do
  # touch a file on the backup drive every 10 seconds so it never
  # idles long enough to spin down; print a dot as a heartbeat
  touch /mnt/g/BB-backup.touch
  printf .
  sleep 10
done

Every few days to keep the drive online. Otherwise, I get the nag-email telling me "we haven't been able to #backup your G: drive in a couple weeks".

I can't actually #cron it out since the cron service won't run in the WSL instance (no systemd/dbus), but I could always replace it with a #PowerShell script that I put into my Windows laptop's scheduler... But I fucking hate writing PowerShell.

#Linux

Anyone familiar with #Cron jobs running shell scripts? Please help!

I've got a simple shell script that runs fine manually from the command line but won't from a Cron job. When searching for an answer, it seems to be a common problem but without a common solution. Any ideas?

I have tried / ensured correct:

• #!/bin/bash or #!/bin/sh first line of script
• chmod +x or chmod 755 on script to ensure file permissions are rwx for user
• owner of file is same user as the Cron running the job
• Cron job created using crontab -e whilst logged in as owner of file
• added (and removed) /bin/bash or /bin/sh before the location of the script to run in Cron
• tried cd /home/mastodon && ./purge_cache.sh instead, same outcome
• I am running the latest version of Bookworm (#Debian) on a #RaspberryPi 4 (4GB RAM, 512GB SD)

Cron:
0 2 * * * /home/mastodon/purge_cache.sh

Shell script:
#!/bin/sh
RAILS_ENV=production /home/mastodon/live/bin/tootctl preview_cards remove --days 7
curl -fsS -m 10 --retry 5 -o /dev/null hc-ping.com/example1
RAILS_ENV=production /home/mastodon/live/bin/tootctl media remove --days 7 --remove-headers
curl -fsS -m 10 --retry 5 -o /dev/null hc-ping.com/example2
RAILS_ENV=production /home/mastodon/live/bin/tootctl media remove --days 7
curl -fsS -m 10 --retry 5 -o /dev/null hc-ping.com/example3

Note: The tootctl lines flush different media cache older than 7 days (taken from #Mastodon docs); the curl lines ping healthchecks.io (a Cron job monitor) to say the job ran successfully.
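The classic cause of "works interactively, fails under cron" is cron's minimal environment, in particular a short `PATH` (typically just `/usr/bin:/bin`) that doesn't include whatever `tootctl` and its toolchain (rbenv, bundler, etc.) need. A first debugging step worth trying is capturing the job's output, since cron otherwise discards or mails it (log path below is just an example):

```shell
# crontab -e: log stdout and stderr so the actual error becomes visible
0 2 * * * /home/mastodon/purge_cache.sh >> /home/mastodon/purge_cache.log 2>&1
```

Once the real error shows up in the log, the usual fix is to set an explicit `PATH=` line near the top of the script (or in the crontab) matching what `echo $PATH` prints in the working interactive session.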