MantisBT

View Issue Details
ID: 0000676
Project: vera
Category: [All Projects] Issues
View Status: public
Date Submitted: 1969-12-31 16:33
Last Update: 2010-09-09 09:38
Reporter: micasaverde
Assigned To: it_team
Priority: normal
Severity: major
Reproducibility: have not tried
Status: resolved
Resolution: fixed
Platform:  OS:  OS Version:
Summary: 0000676: memory leak in tmpfs

Description:
Still occasionally happens to network monitor logs...

so we don't lose this issue:

YESTERDAY:
The paid tunnel script uses pipes.

It seems that ntpclient uses about 2.8 MB and nas about 1.6 MB, but I don't see anywhere for those to be logging to.


TODAY:
I had 2.4 MB of usage, of which NetworkMonitor.log used 1.4 MB.
I ran RotateLogs, and afterwards I had 2.5 MB and the NetworkMonitor.log file wasn't there anymore.

I stopped almost everything (LuaUpnp, CurlQueue, NetworkMonitor, lighttpd, dropbear, ntpclient) with no change, until I stopped udhcpc, which freed up 2.1 MB.

I'll restart the boxes again, but I think the problem is in the tmpfs driver or the kernel, because it doesn't seem to be reproducible with the same app and it happens for different apps.
I'll have to test more and try to find the cause.
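
A quick way to tell a genuine tmpfs/kernel leak apart from a deleted-but-still-open file is to compare what the filesystem reports against what the visible files add up to. A minimal sketch, assuming the BusyBox df and du applets available on these boxes:

  # Space the kernel says is in use on the mount:
  df -k /tmp
  # Space accounted for by files still visible under /tmp:
  du -sk /tmp
  # If df reports several MB more than du can see, the difference is most
  # likely held by a process with an open handle to a file that has already
  # been unlinked (for example by RotateLogs), not by the tmpfs driver itself.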


Aaron Bergen wrote:
CJ,

I logged into a customer's system that was running very slowly and did a df -h; it showed 7 MB in use on /tmp, but the logs were only using 2 MB, and du -s -h on /tmp and all the subdirectories showed no extra data. So I figured some process had opened a file, the file had been removed, but the process hadn't released the file handle. That's what usually causes 'phantom' disk usage. You can tell which process is pinning the inode, because when you kill the process the disk space frees up.

So I went through and started killing processes, and sure enough, it was the ssh process for the paid tunnels that seemed to be occupying almost 2 MB on /tmp. Here's the ssh capture:

root@HomeControl:/tmp# df -h
Filesystem Size Used Available Use% Mounted on
tmpfs 14.9M 5.0M 9.8M 34% /tmp
/dev/mtdblock/4 1.8M 560.0k 1.3M 30% /jffs
mini_fo:/jffs 5.5M 5.5M 0 100% /
root@HomeControl:/tmp# ps aux
 PID Uid VSZ Stat Command
   1 root 2360 S init
   2 root SW [keventd]
   3 root SWN [ksoftirqd_CPU0]
   4 root SW [kswapd]
   5 root SW [bdflush]
   6 root SW [kupdated]
   8 root SW [mtdblockd]
  53 root SWN [jffs2_gcd_mtd4]
  69 root 2360 S init
  77 root 2484 S /sbin/syslogd -C128 -m 0
  79 root 2356 S /sbin/klogd
 255 root SW [khubd]
 662 root 2368 S crond -c /etc/crontabs
 694 root 2380 S udhcpc -t 0 -i eth0.1 -b -p /var/run/eth0.1.pid -R
 695 root 1868 S /usr/sbin/dropbear -p 22
 696 nobody 1208 S /usr/sbin/dnsmasq -D -y -Z -b -E -s lan -S /lan/ -l /
 729 root 5944 S lighttpd -f /etc/lighttpd.conf
 1028 root 2372 S /bin/ash /usr/bin/cmh-ra-daemon.sh
 1056 root 2036 S ssh -p 232 -T -y -i /etc/cmh-ra/keys/cmh-ra-key.priv
 6469 root 1996 S ssh -y -T -p 232 -i /etc/cmh/ra_key -R 10390:127.0.0.
 6470 root 1032 S /usr/bin/charperiod
 6528 root 1928 S /usr/sbin/dropbear -p 22
 6539 root 2380 S -ash
 7573 root 1312 S /usr/sbin/ntpclient -i 60 -s -l -D -p 123 -h 0.openwr
 7727 root 2364 R ps aux
root@HomeControl:/tmp# kill 1056
root@HomeControl:/tmp# df -h
Filesystem Size Used Available Use% Mounted on
tmpfs 14.9M 3.6M 11.3M 24% /tmp
/dev/mtdblock/4 1.8M 560.0k 1.3M 30% /jffs
mini_fo:/jffs 5.5M 5.5M 0 100% /

So we were still using 5.0 MB even after I had killed LuaUpnp, NetworkMonitor, CurlQueue, etc. I killed the ssh pid 1056, and that freed up 1.4 MB. Unfortunately my next move was to stupidly kill the other ssh session, which was my remote access. So I'll never know what was using the rest.

But somehow ssh was using a bunch of disk space in a phantom file. In our code that would happen when we were logging to a file, rotated and rm'd the file, but the process didn't close and reopen it. We've since fixed this by doing a killall of our LuaUpnp, etc., so they close and reopen the handle after the file is moved. Is it possible that when you start ssh you're logging some output to a file, like with a > or >>? And maybe that file is rm'd in RotateLogs? Even if that accounts for 1.4 MB of space, there's still 3.6 MB used by one of the other processes in that list. So maybe dropbear or ntpclient or lighttpd logs to a file that we delete? This would account for the system instability, since the system stops working when /tmp gets full.
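
Rather than killing processes one at a time to see which one releases the space, the holder of a deleted file can usually be identified directly through /proc. A minimal sketch, assuming /proc is mounted, the shell is BusyBox ash, and a readlink applet is available (lsof is typically not present on these images); the " (deleted)" suffix on /proc fd symlinks is standard Linux behaviour, not something specific to this firmware:

  # List every open file descriptor whose target has been unlinked from /tmp.
  for fd in /proc/[0-9]*/fd/*; do
      target=$(readlink "$fd" 2>/dev/null) || continue
      case "$target" in
          /tmp/*" (deleted)")
              pid=${fd#/proc/}; pid=${pid%%/*}
              echo "PID $pid still holds: $target" ;;
      esac
  done

If readlink is not available, ls -l /proc/*/fd shows the same "-> /tmp/... (deleted)" targets. A related mitigation on the logging side would be to truncate logs in place (": > logfile") instead of rm'ing them, so a writer that never reopens its handle cannot keep the freed space pinned.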
Tags: No tags attached.
Attached Files

- Relationships

- Notes
There are no notes attached to this issue.

- Issue History
Date Modified Username Field Change
1969-12-31 16:33 micasaverde New Issue
1969-12-31 16:33 micasaverde Status new => assigned
1969-12-31 16:33 micasaverde Assigned To => it_team
1969-12-31 16:33 micasaverde Description Updated
1969-12-31 16:33 micasaverde Status assigned => resolved
1969-12-31 16:33 micasaverde Resolution open => fixed

