Ubuntu keeps remounting /dev/shm with different mounting options periodically
Unraveling the Mystery: Ubuntu’s Persistent Remounting of /dev/shm
The behavior of /dev/shm being periodically remounted with varying options on your Ubuntu 24.04.2 system, despite no explicit configuration in /etc/fstab, can be a perplexing issue. This intermittent remounting, occurring roughly every ten seconds, suggests an underlying system process or configuration is dynamically managing this crucial mount point. At revWhiteShadow, we understand the frustration this can cause, especially when standard troubleshooting methods like examining fstab, dmesg, journalctl, or even strace on init (PID 1) don't yield immediate answers. The fact that disabling AppArmor also did not resolve the issue points to a more fundamental mechanism at play. We've observed similar behavior across different Ubuntu kernel versions, and the discrepancy you noted between your system and another running a similar kernel version highlights the subtle yet significant differences that can arise from specific system configurations or installed software.
This comprehensive guide aims to demystify the recurring remounting of /dev/shm, offering detailed insights and actionable steps to pinpoint and resolve the root cause. We will delve into the typical mechanisms that manage ephemeral mounts like /dev/shm in modern Linux distributions, and explore less common but potent influences that could be contributing to this behavior. Our goal is to provide you with the knowledge and tools to confidently diagnose and rectify this anomaly, ensuring a stable and predictable system environment.
Understanding the Dynamic Nature of /dev/shm in Modern Ubuntu
In contemporary Linux systems, particularly those leveraging systemd, the management of mount points often extends beyond the static entries in /etc/fstab. For ephemeral filesystems like /dev/shm (shared memory), systemd often plays a central role in their lifecycle and configuration. /dev/shm is typically mounted as a tmpfs filesystem, which means it resides entirely in RAM and is therefore volatile, cleared upon system reboot. The mount options observed, such as rw,nosuid,nodev,size=4011076k,nr_inodes=1002769,inode64, are characteristic of a tmpfs mount managed by the operating system's initialization process.
The frequent remounting suggests that some service or script is actively re-applying these mount options, potentially re-creating the mount point or modifying its attributes. Since you've confirmed there is no direct fstab entry, we must look at other system-level configurations.
The Role of Systemd Mount Units
Systemd is a comprehensive system and service manager for Linux operating systems. It utilizes unit files to manage various system components, including mount points. These mount units are often generated dynamically or can be defined in specific system directories.
Investigating Dynamically Generated Mount Units
Systemd can automatically generate mount units for devices discovered during the boot process or for certain standard mount points. The tmpfs on /dev/shm is a prime example of a mount managed this way. We need to identify if a specific systemd component or service is responsible for this remounting behavior.
Identifying Active Mount Units: We can query systemd for all active mount units using the command:
systemctl list-units --type=mount
While /dev/shm is usually managed implicitly, it's worth examining the output for any unusual entries or mount points that might be related or trigger reconfigurations.
Examining the systemd-tmpfiles Service: The systemd-tmpfiles service manages temporary files and directories according to predefined rules, typically located in /usr/lib/tmpfiles.d/, /run/tmpfiles.d/, and /etc/tmpfiles.d/. Bear in mind that tmpfiles.d rules create, clean, and remove files and directories; they do not carry mount options, so they are an unlikely direct cause of a remount, but a rule that touches /dev/shm could still coincide with the activity you see. Scrutinize the .conf files in these directories and pay particular attention to any directives that reference /dev/shm. For instance, an illustrative entry might look like:
d /dev/shm 1777 root root 10d
For the tmpfs mount itself, however, systemd mount units are the mechanism that carries explicit mount options.
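A quick way to rule tmpfiles.d in or out is to dump the merged rule set and search it for shm references; systemd-tmpfiles --cat-config prints the effective configuration from all three directories. A minimal check:
# Every effective tmpfiles.d rule that mentions shm (often none).
systemd-tmpfiles --cat-config | grep -n shm

# Equivalent manual search across the rule directories.
grep -rn "shm" /usr/lib/tmpfiles.d/ /run/tmpfiles.d/ /etc/tmpfiles.d/ 2>/dev/null
If nothing turns up, tmpfiles.d can safely be crossed off the list of suspects.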
Systemd Mount Unit for /dev/shm
The most probable culprits for managed tmpfs mounts like /dev/shm are systemd mount units. These units are automatically generated for entries in /etc/fstab but can also be created independently. We need to find the specific unit, if any, that governs /dev/shm.
Locating the Systemd Mount Unit: You can often find out which unit is responsible for a mount point using systemctl status <mount_point>. However, for implicitly managed mounts this might not point to a file at all: on a stock Ubuntu installation, /dev/shm is one of the API filesystems that systemd mounts itself during early boot, so the dev-shm.mount unit usually exists only at runtime, with no unit file on disk. A more effective check is therefore to look for any mount unit files that explicitly reference /dev/shm by searching within the systemd unit directories:
sudo find /etc/systemd/system /usr/lib/systemd/system -name "*.mount" -exec grep -l "/dev/shm" {} \;
This command searches all .mount unit files in the systemd configuration directories for any mention of /dev/shm. If it finds a file (e.g., /etc/systemd/system/dev-shm.mount), inspect its contents:
cat /etc/systemd/system/dev-shm.mount
Such a file would typically contain a [Mount] section with directives like Where=/dev/shm, What=tmpfs, and Type=tmpfs. Crucially, it might also include Options=rw,nosuid,nodev,size=... or similar. If such a file exists and is active, it's a strong candidate for managing /dev/shm. If the find command returns nothing, that is expected on a default installation and simply means the mount options come from systemd itself.
Understanding systemd-run and Dynamic Mounts: It's also possible that a service is using systemd-run, or calling mount directly, to re-apply specific options to /dev/shm as part of its operation. This is less common for a system-wide mount point like /dev/shm, but not entirely impossible.
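Even when no dev-shm.mount file exists on disk, systemd still tracks the mount as a runtime unit, so you can ask it directly what it believes the unit's source, options, and backing file are, and compare that with what the kernel reports. A minimal sketch using standard systemctl and findmnt invocations:
# The runtime mount unit, whether or not a unit file exists.
systemctl status dev-shm.mount --no-pager

# Key properties; FragmentPath comes back empty when there is no file on disk.
systemctl show dev-shm.mount -p What -p Where -p Type -p Options -p FragmentPath

# What the kernel currently reports for the mount point.
findmnt /dev/shm -o SOURCE,FSTYPE,OPTIONS
If FragmentPath is empty, the options you see come from systemd's built-in early-boot mount (or from whatever keeps remounting it), not from a unit file you can edit directly.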
Investigating Mount Options and System Behavior
The fact that the mount options themselves are changing, specifically the presence or absence of the size= and nr_inodes= parameters, is a critical clue. This suggests that the mount is not simply being re-applied with fixed options, but that the options are being recalculated or determined dynamically each time.
mountd and Shared Memory Management
While less relevant on modern systems for /dev/shm, which is typically handled by tmpfs, some older or specialized configurations might involve a daemon like mountd (though this is usually associated with NFS). For tmpfs, the kernel itself and systemd's mount management are the primary actors.
Kernel Parameters and /proc/sys
The kernel implements tmpfs, but its size and inode limits are set per mount via mount options rather than through global tunables; there is no /proc/sys/fs/tmpfs/ directory on a stock kernel. It is still worth a quick check that nothing unusual is exposed on your system:
- Check for tmpfs-related kernel parameters:
sysctl -a | grep -i tmpfs
On a standard Ubuntu kernel this typically returns nothing, because parameters such as size= and nr_inodes= are mount options, not sysctls. Even if a vendor kernel or module exposes something here, dynamic changes to the observed mount options (like the size value appearing and disappearing) are unlikely to be controlled by global sysctl values. A quick way to relate the observed values to your hardware is sketched below.
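To judge whether the observed size= and nr_inodes= values are simply the kernel's defaults rather than something an application chose, recall that a tmpfs mounted without an explicit size is capped at half of physical RAM. The sketch below compares that default with what is currently mounted; the arithmetic is approximate and the exact inode default can vary slightly between kernel versions.
# Half of MemTotal (KiB) is the tmpfs default size when no size= is given.
awk '/MemTotal/ {printf "expected default size ~ %d KiB\n", $2 / 2}' /proc/meminfo

# The options the current /dev/shm mount actually reports.
findmnt -no OPTIONS /dev/shm
If the reported size matches the computed default (it plausibly does for size=4011076k on a machine with roughly 8 GB of RAM), whatever is remounting /dev/shm is most likely not passing an explicit size at all.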
Systemd Services Triggering Remounts
If a systemd unit file isn't the direct cause, then a running service might be responsible for repeatedly issuing the mount command. We need to identify any processes that are actively interacting with /dev/shm.
Tracing System Activity with auditd or strace
When direct observation via journalctl or dmesg fails, more granular tracing tools become essential.
Using auditd for Mount Operations: The audit daemon (auditd) can be configured to log specific system calls, including mount. This is a powerful tool for tracking which process is performing the mount operation.
- Install auditd if not already installed:
sudo apt update
sudo apt install auditd audispd-plugins
- Configure an audit rule to watch for mount system calls: Add a rule to /etc/audit/rules.d/ (for example, in a new file named 10-mount.rules). A plain watch such as
-w /dev/shm -p w -k shm_watch
only records write operations on /dev/shm; a more direct approach is to audit the mount system call itself, for instance:
## Watch for mount syscalls touching /dev/shm
-a always,exit -F arch=b64 -S mount -F path=/dev/shm -k shm_mount_watch
After adding the rule, reload the audit rules:
sudo augenrules --load
- Monitor audit logs: You can review the current rules and search the logs:
sudo auditctl -l    # List current rules
sudo auditctl -s    # Show status
sudo ausearch -f /dev/shm -i | grep "syscall=mount"
Or, to see recent events recorded under the key:
sudo ausearch -k shm_mount_watch -i -ts recent
The audit logs will show the process ID (PID) and the executable path that initiated the mount system call. This is often the most direct way to identify the responsible process.
Using strace on Specific Processes: While strace on PID 1 (init) can be overwhelming, you can selectively strace processes that seem suspicious or show high CPU usage. If you can narrow down candidate processes, attaching strace to them can reveal their mount system calls. However, given the rapid, periodic nature of the remount, it's more likely that a background service or a scheduled task is involved. Two quick checks for turning an audited PID into a named service, and for listing processes that currently hold files open in /dev/shm, are sketched below.
Cron Jobs and Systemd Timers
Scheduled tasks are a common source of periodic system activity.
Checking Cron Jobs:
- User Cron Jobs: run crontab -l for each user, especially root.
- System-wide Cron Jobs: check the files in /etc/cron.d/, /etc/cron.hourly/, /etc/cron.daily/, /etc/cron.weekly/, and /etc/cron.monthly/.
Checking Systemd Timers: Systemd timers are the modern equivalent of cron jobs.
systemctl list-timers --all
This command lists all timers, including inactive ones. Examine the output for any timers firing at short intervals or triggering actions related to mounts or system management. If a timer looks suspicious, investigate the corresponding .service unit it activates; a short helper for enumerating per-user crontabs and inspecting a candidate timer is sketched below.
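Because per-user crontabs are easy to overlook, a short loop over all accounts is a convenient complement to systemctl list-timers. In the sketch below, example.timer and example.service are placeholder names for whichever timer looks suspicious on your system.
# Print every non-empty user crontab.
for u in $(cut -d: -f1 /etc/passwd); do
    entries=$(sudo crontab -l -u "$u" 2>/dev/null) && \
        printf '=== crontab for %s ===\n%s\n' "$u" "$entries"
done

# All timers, sorted by next activation; look for ones firing every few seconds.
systemctl list-timers --all --no-pager

# Inspect a candidate timer and the service it triggers (placeholder names).
systemctl cat example.timer example.service --no-pager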
Third-Party Software and Configuration Conflicts
Sometimes, third-party applications or system management tools can interfere with standard system behavior.
Review Recently Installed Software: If this issue started after installing new software, that software is a prime suspect. Consider temporarily uninstalling recently added packages to see if the behavior changes.
Containerization and Virtualization: If you are using Docker, Podman, LXC, or other containerization technologies, these often manage their own namespaces and mounts, including /dev/shm. While they typically don't interfere with the host's /dev/shm in this manner, a misconfiguration in a container's setup or a host-level service managing these containers could be the cause.
- Docker: If Docker is running, check its configuration and any volumes or shared-memory settings for your containers.
- Snap Packages: Ubuntu heavily utilizes Snap packages, and some snaps run in their own isolated environments. While less likely to affect the host's /dev/shm directly, it's worth considering whether a snap is behaving unexpectedly. A quick way to see the /dev/shm instances that exist across mount namespaces is sketched below.
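If containers or snaps are in play, it helps to see how many separate mount namespaces exist and what /dev/shm looks like inside one of them. This is only a sketch: nsenter requires root, and 4321 stands in for whatever PID lsns reports for the namespace you want to inspect.
# Mount namespaces on the system and a representative PID for each.
sudo lsns -t mnt -o NS,NPROCS,PID,COMMAND

# All tmpfs mounts visible from the host namespace.
findmnt -t tmpfs

# /dev/shm as seen from inside another mount namespace (replace 4321).
sudo nsenter -t 4321 -m findmnt /dev/shm -o SOURCE,OPTIONS
A container's private /dev/shm lives in its own namespace and normally cannot change the host's mount, but this view quickly confirms whether the mount being remounted is really the host's.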
Kernel Modules and Device Management
While less probable for a standard tmpfs mount, certain kernel modules can influence how filesystems are managed.
- Check Loaded Kernel Modules:
lsmod
Look for any unusual or recently loaded modules, especially those related to filesystem management or system monitoring.
AppArmor and SELinux (Revisited)
Although you mentioned disabling AppArmor, it's worth a final consideration, especially regarding how system services are profiled. Even with AppArmor disabled, remnants of its configuration or other mandatory access control systems like SELinux (though less common on default Ubuntu) could theoretically cause unexpected behavior if they try to enforce policies on processes that manage /dev/shm.
Checking Systemd Service Configuration for Restrictive Options
Systemd service files themselves can specify various directives that affect how processes are run, including mount options for specific namespaces. If a service is configured to run with particular mount options or constraints, this could manifest as remounting.
- Examine Systemd Service Files: If you identify a service through auditd or timer analysis that seems responsible, examine its unit file (typically in /etc/systemd/system/ or /usr/lib/systemd/system/). Look for directives within the [Service] section that relate to mount options or filesystem access, such as PrivateTmp=, ProtectSystem=, ReadWritePaths=, or MountFlags=; a quick way to query these is sketched below.
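Rather than reading unit files by hand, you can ask systemd for the effective values of the sandboxing directives most likely to involve mounts. In this sketch, suspect.service is a placeholder for whatever unit your tracing implicated.
# Effective mount-related sandboxing settings for a suspect unit (placeholder name).
systemctl show suspect.service --no-pager \
    -p PrivateTmp -p ProtectSystem -p ProtectHome \
    -p ReadWritePaths -p ReadOnlyPaths -p MountFlags

# The unit file itself, including any drop-in overrides.
systemctl cat suspect.service --no-pager
Non-default values here mean the service runs in its own mount namespace; that usually isolates its view of /dev/shm rather than altering the host's, but it narrows down which services touch mounts at all.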
Diagnosing the Changing Mount Options: size= and nr_inodes=
The fact that the size and nr_inodes parameters come and go is a very strong indicator that whatever is remounting /dev/shm is doing so with different option sets, or with values derived dynamically. Keep in mind that a tmpfs mounted without an explicit size= defaults to half of physical RAM, so one actor mounting with explicit options and another relying on defaults can produce exactly this kind of alternation.
System Memory Pressure: A tmpfs limit is derived from total physical RAM at mount time (half of RAM by default) rather than continuously adjusted, so memory pressure alone will not change the reported size= between remounts. Processes that frequently allocate and free large amounts of shared memory in /dev/shm affect usage within that limit, not the limit itself. A remount every 10 seconds therefore points to a programmatic actor rather than dynamic sizing under load.
Specific Application Behavior: Certain applications, particularly those that rely heavily on shared memory for inter-process communication or caching, might have specific routines that interact with /dev/shm in a way that triggers these remounts. Identifying such an application through process monitoring (e.g., top, htop, ps) during the suspected remount intervals could be key; a simple watch loop is sketched below.
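To correlate the remounts with actual shared-memory consumers, a simple watch over /dev/shm usage and SysV shared-memory segments during a few remount cycles is often enough. A minimal sketch:
# Refresh every second: tmpfs usage, the largest files in /dev/shm,
# and active SysV shared-memory segments.
watch -n 1 '
    df -h /dev/shm;
    ls -lSh /dev/shm 2>/dev/null | head -n 5;
    ipcs -m | head -n 10
'
If a particular application's files appear and disappear in step with the remounts, that application (or the service managing it) is the place to look next.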
Advanced Troubleshooting: System Call Tracing with strace
If all else fails, a very targeted use of strace can sometimes reveal the exact command being executed.
Attaching strace to the mount System Call
We can use strace to specifically monitor all instances of the mount system call and capture its arguments.
Find the PID of the process: Use auditd as described earlier to identify the PID. Let's assume you found a PID, for example, 1234.
Attach strace:
sudo strace -p 1234 -s 1024 -f -e trace=mount
- -p 1234: attach to process ID 1234.
- -s 1024: set the maximum string length to 1024 characters so the full mount options are captured.
- -f: follow child processes.
- -e trace=mount: only trace mount system calls.
Observe the output closely. For a remount of /dev/shm you should see a line along the lines of:
mount("tmpfs", "/dev/shm", "tmpfs", MS_NOSUID|MS_NODEV, "size=4011076k,nr_inodes=1002769") = 0
strace prints the source, target, filesystem type, the flag bits, and the option string passed as data, so this shows precisely what options are being used for the remount.
Putting it All Together: A Step-by-Step Diagnostic Plan
Given the evidence and potential causes, we recommend the following systematic approach:
Identify the Responsible Process with auditd: This is the most critical first step. Configure auditd to watch for the mount system call targeting /dev/shm, then analyze the audit logs to find the PID and executable responsible.
- Rule Example:
-a always,exit -F arch=b64 -S mount -F path=/dev/shm -k shm_mount_watch
- Monitoring:
sudo ausearch -f /dev/shm -i | grep "syscall=mount"
Investigate Systemd Mount Units: While auditd is running, check whether there's an explicit systemd mount unit for /dev/shm:
sudo find /etc/systemd/system /usr/lib/systemd/system -name "*.mount" -exec grep -l "/dev/shm" {} \;
If a unit is found (e.g., dev-shm.mount), examine its contents and its status:
systemctl status dev-shm.mount
Review Systemd Timers and Cron Jobs: If auditd doesn't point to an obvious service, investigate scheduled tasks that might be running at the observed interval:
systemctl list-timers --all
- Check files in /etc/cron.d/, /etc/cron.hourly/, and the other /etc/cron.* directories, and run crontab -l for all users.
Analyze the Identified Process: Once the process is identified:
- If it's a Systemd Service: examine its unit file (.service) for any directives related to mounts or system resource management.
- If it's a User Application: check its configuration files or logs for any settings related to shared memory or mounting.
- If it's a System Daemon: consult its documentation or online resources to understand its behavior concerning /dev/shm.
Monitor System Load and Memory Usage: Observe system resource usage with htop or top during the remount intervals. High CPU or memory spikes associated with the identified process could provide further clues.
Examine tmpfs Kernel Parameters: While unlikely to cause frequent remounts, confirm that no unusual global settings are exposed:
sysctl -a | grep -i tmpfs
Consider Snap and Containerization Impact: If your system heavily utilizes snaps or containers, investigate their configurations and potential misbehaviors.
Proactive Measures and Prevention
Once the root cause is identified, you can implement a definitive solution. This might involve:
- Modifying or disabling the problematic systemd service or timer.
- Adjusting application configurations.
- Creating a more specific systemd mount unit (or /etc/fstab entry) that overrides the behavior. For instance, if the remounting is disruptive, you could define your desired options for /dev/shm explicitly and ensure they take precedence over the rogue remounting; a minimal sketch follows.
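As one example, /dev/shm can be given an explicit /etc/fstab entry; systemd's fstab generator turns it into a dev-shm.mount unit whose options then take precedence over the built-in defaults. This is a sketch rather than a prescription: the size=2G value is purely illustrative, and pinning options does not by itself stop another process from remounting the filesystem.
# Append an explicit entry for /dev/shm (illustrative 2G limit).
echo 'tmpfs /dev/shm tmpfs rw,nosuid,nodev,size=2G 0 0' | sudo tee -a /etc/fstab

# Regenerate units from fstab and apply the new options in place.
sudo systemctl daemon-reload
sudo mount -o remount /dev/shm

# Confirm the options now active.
findmnt /dev/shm -o OPTIONS
Identifying and correcting the process that issues the remounts remains the primary fix; this override simply keeps the mount in a known state while you do so.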
By systematically applying these diagnostic steps, you should be able to pinpoint the exact mechanism responsible for the periodic remounting of /dev/shm on your Ubuntu system. At revWhiteShadow, we are committed to providing in-depth technical guidance to help you overcome these challenging system administration issues and maintain a well-functioning environment. This thorough approach will not only resolve the immediate problem but also enhance your understanding of Ubuntu's intricate system management.