"Actually, the only problem all these authentications cause me is that they effectively spam the log on the cloud server." If the logging done by the server is the problem than change the logging configuration. (If the server uses BASIC-authentication it should be able to handle this properly.) "So, having my development environment hosted on the NAS in the cloud server proves to be very convenient to me. When working remotely there is no trouble. I only have any trouble when working locally." And you are not doing something differently when working remotely? Werner
Actually, the only problem all these authentications cause me is that they effectively spam the log on the cloud server. I discovered the problem when I was rooting around on the NAS and noticed that the cloud server's log had reached 22GB. This traffic also slows down array scrubbing because the NAS is constantly writing the log, which takes it away from scrubbing. This extends the time to scrub from about one day to anywhere from 3 to 5 days (40TB RAID6 array).

I configured it this way, with the dev environment on the NAS, about 9 months ago. Performance has been satisfactory in all regards, except for this and a problem with symlinks not working (as you say, WebDAV is not great for a filesystem, but I am just figuring this out). Perhaps another reconfigure is in my future. But I think I will wait until I return from Ukraine.
I do that because the cloud server package pretty much requires it. There is also an NFS share and an iSCSI share between this NAS and this workstation, for particular purposes. But if I use any other protocol to access the data maintained by the cloud server, it screws up the cloud server's database and causes me all sorts of trouble. So, I have to access it through its interface, and that is WebDAV.

Basically, while I work primarily out of one location, I can and do work from pretty much anywhere in the world. At least a couple of times a week, I work remotely from my client's office. In a couple of weeks, I will be in Ukraine for a while, and I will be doing work from there. So, having my development environment hosted on the NAS in the cloud server proves to be very convenient to me. When working remotely there is no trouble. I only have any trouble when working locally.

I suppose I could reconfigure to remove the WebDAV share and put a cloud client on the workstation. This would leave me with a local copy of the dev environment on my workstation, and copies would be pushed to the cloud server every time I save. Of course, this is how both my laptop and my Android device are configured. This would make GitHub and GitLab easier to use from the workstation too. If your answer is that there is no good solution because of the limitations of WebDAV, then I suppose this is what I must do.
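That sync-client setup can also be driven from the command line; a sketch using the nextcloudcmd tool shipped with the Nextcloud desktop client package (the local directory is hypothetical, and the URL is taken from the mount command quoted later in this thread):

    # one-shot two-way sync between a local directory and the server;
    # prompts for the password if none is supplied
    nextcloudcmd --user jiml --trust /home/jiml/dev https://192.168.2.27:12080/nextcloud/remote.php/webdav/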
"The workstation and the NAS that hosts the cloud server are located on the same LAN." Many people seem to do this, but I still can't understand it: Why do you use WebDAV as a network file system in the LAN when there are proper network file systems like NFS and SMB? (WebDAV is not very well suited for a file system.) Werner
You can't see from the command line which authentication method is used. It is defined by the server, and you have to look there if you want to know. Alternatively, you may use debugging tools (e.g. davfs2's debug option) on the client side. Most probably your server uses HTTP BASIC authentication.

Because HTTP is stateless, the credentials must be sent with every request. That is how it is designed. If you think this causes unnecessary traffic and computation: no. Compared with TLS, the XML used by WebDAV, and probably your data, this is nothing. No reason to care about that one additional header.

"The application that is causing the requests to be made is Kate."

Are you sure? None of the many daemons running in the background?

Assuming it is Kate: you want to use Kate. Kate does a lot of file system action. Because the files are on a remote server, a lot of file system action causes a lot of HTTP requests and traffic. So what happens is what you want to happen. If you don't like it, then use other tools.

Werner
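The per-request credentials are easy to observe with curl; a sketch against the server from this thread (substitute real credentials):

    # -v prints the request headers; every request carries its own
    # Authorization header, by design
    curl -v --insecure -u jiml:PASSWORD -X PROPFIND -H "Depth: 1" \
        https://192.168.2.27:12080/nextcloud/remote.php/webdav/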
This is the line used to mount the davfs share:

sudo mount -t davfs https://192.168.2.27:12080/nextcloud/remote.php/webdav/ /home/jiml/owncloud -o rw,uid=jiml,gid=jiml

So, yes, it is using https and presumably one of those authentication protocols. Is there another, more persistent way?

The workstation and the NAS that hosts the cloud server are located on the same LAN. The application that is causing the requests to be made is Kate. Kate is a GUI-based editor for KDE. I use it almost exclusively in my development work, and I find that having my development directory on my cloud server is very convenient, given that I may be doing work from pretty much anywhere. At home I use my workstation; at all other locations I use my laptop. It is the workstation that mounts the webdav share.
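As for persistence, davfs2 can at least supply the credentials automatically instead of prompting; a sketch using the documented secrets file plus an /etc/fstab entry (PASSWORD is a placeholder, and the user option assumes the account is in the davfs2 group):

    # ~/.davfs2/secrets (must be mode 600)
    https://192.168.2.27:12080/nextcloud/remote.php/webdav/ jiml PASSWORD

    # /etc/fstab entry, so the share can be mounted without sudo
    https://192.168.2.27:12080/nextcloud/remote.php/webdav/ /home/jiml/owncloud davfs rw,user,noauto,uid=jiml,gid=jiml 0 0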
What authentication method do you use? If you use one of the HTTP authentication methods (BASIC or DIGEST), then the client must send the credentials with every request, because HTTP is a stateless protocol.

If davfs2 sends several requests per second to the server, it does this because some application does some file system action. In many cases, especially with graphical user interfaces, this is quite useless activity, like checking for changes to update the user interface, fetching preview pictures, ... You need to find the culprit and stop it from doing useless file system activities.

You can set option "debug kernel" and then check your log files for messages from mount.davfs. For some entries that start with FUSE_XXX a PID will be shown. That is the PID of the process that issued the request. "ps ax" will show you the program name for that process.

Werner
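Spelled out as commands, that hunt for the culprit might look like this; a sketch, with the log location an assumption (Mageia may route it through journald instead of /var/log/syslog):

    # in ~/.davfs2/davfs2.conf, then remount the share
    debug kernel

    # look for FUSE_ entries from mount.davfs; they include the caller's PID
    grep mount.davfs /var/log/syslog | grep FUSE_

    # map that PID to a program name
    ps ax | grep <pid>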
Server is Nextcloud 12 hosted on NAS4Free (FreeBSD system). Client is Mageia 6 Linux. A webdav share is mounted in the user's home directory providing access to the files in the Nextcloud instance. User has Kate open, with multiple files from the webdav share open in the editor. Nextcloud shows the client authenticating with the server several times a second. How can I stop this and have the share remain connected? Server does not support locks.

Contents of ~/.davfs2/davfs2.conf:

# Available options and default values
# ====================================

# General Options
# ---------------
# dav_user        davfs2            # system wide config file only
# dav_group       davfs2            # system wide config file only
# kernel_fs       fuse
# buf_size        16                # KiByte

# WebDAV Related Options
# ----------------------
# use_proxy       1                 # system wide config file only
# proxy                             # system wide config file only
# servercert
# clientcert
# secrets         ~/.davfs2/secrets # user config file only
# ask_auth        1
use_locks       0
# lock_owner      <user-name>
# lock_timeout    1800              # seconds
# lock_refresh    60                # seconds
# use_expect100   0
# if_match_bug    0
# drop_weak_etags 0
# allow_cookie    0
# precheck        1
# ignore_dav_header 0
# server_charset
# connect_timeout 10                # seconds
# read_timeout    30                # seconds
# retry           30                # seconds
# max_retry       300               # seconds
# add_header

# Cache Related Options
# ---------------------
# backup_dir      lost+found
# cache_dir       /var/cache/davfs2 # system wide cache
#                 ~/.davfs2/cache   # per user cache
cache_size      850               # MiByte
# table_size      1024
# dir_refresh     60                # seconds
# file_refresh    1                 # second
# delay_upload    10
gui_optimize    1

# Debugging Options
# -----------------
# debug           # possible values: config, kernel, cache, http, xml,
#                 # httpauth, locks, ssl, httpbody, secrets, most

Contents of /etc/davfs2/davfs2.conf:

# Available options and default values
# ====================================

# General Options
# ---------------
# dav_user        davfs2            # system wide config file only
# dav_group       davfs2            # system wide config file only
# kernel_fs       fuse
# buf_size        16                # KiByte

# WebDAV Related Options
# ----------------------
# use_proxy       1                 # system wide config file only
# proxy                             # system wide config file only
# trust_ca_cert
# servercert                        # deprecated: use trust_ca_cert
# trust_server_cert
# clientcert
# secrets         ~/.davfs2/secrets # user config file only
# ask_auth        1
# use_locks       1
# lock_owner      <user-name>
# lock_timeout    1800              # seconds
# lock_refresh    60                # seconds
# use_expect100   0
# if_match_bug    0
# drop_weak_etags 0
# n_cookies       0
# precheck        1
# ignore_dav_header 0
# use_compression 0
# min_propset     0
# follow_redirect 0
# server_charset
# connect_timeout 10                # seconds
# read_timeout    30                # seconds
# retry           30                # seconds
# max_retry       300               # seconds
# add_header

# Cache Related Options
# ---------------------
# backup_dir      lost+found
# cache_dir       /var/cache/davfs2 # system wide cache
#                 ~/.davfs2/cache   # per user cache
cache_size      950               # MiByte
# table_size      1024
# dir_refresh     60                # seconds
# file_refresh    1                 # second
# delay_upload    10
# gui_optimize    0
# minimize_mem    0

# Debugging Options
# -----------------
# debug           # possible values: config, kernel, cache, http, xml,
#                 # httpauth, locks, ssl, httpbody, secrets, most
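Incidentally, the cache options shown above are what control how often davfs2 revalidates files and directories against the server; raising them reduces request frequency at the cost of staler views. A sketch with arbitrary example values:

    # ~/.davfs2/davfs2.conf -- example values only
    dir_refresh     300     # seconds before a directory is re-read (default 60)
    file_refresh    30      # seconds between file revalidations (default 1)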