Docker 0.258.3: Large File Uploads Fail

We have been testing file uploads to a server that needs to accept large (~2 GB) attachments. In the process we discovered that files larger than ~500 MB fail to upload from remote clients, even though locally uploaded files are received without error up to ~1 GB (the largest size tested so far).

The server is located behind an Nginx reverse proxy which has been configured to permit large files and long transfer times in case the remote client is on a slow or problematic connection.

As mentioned above, uploads succeed if they originate on the server or on its local area network, and they also arrive from a remote client if the file is smaller than ~600 MB (test files increment in 100 MB steps).
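For anyone reproducing the test, the dummy files can be generated with something like the following sketch (the dd options and file names are illustrative):

# Generate dummy test files in 100 MB steps
for size in 100 200 300 400 500 600 700; do
  dd if=/dev/urandom of="test_${size}M.bin" bs=1M count="$size"
done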

The NC_ATTACHMENT_FIELD_SIZE variable has been set to ~2 GB for testing and appears to be taking effect: its default is only 20 MiB, and locally uploaded test files far larger than that are accepted.
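For reference, a minimal sketch of how the variable can be set when starting the container with plain docker run (container name, port mapping and image tag are illustrative; the value is assumed to be in bytes, in line with the 20 MiB default):

# Raise the attachment size limit to ~2 GiB (value assumed to be in bytes)
docker run -d --name nocodb \
  -e NC_ATTACHMENT_FIELD_SIZE=2147483648 \
  -p 8080:8080 \
  nocodb/nocodb:latest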

When a failure occurs, the upload window vanishes and an error is reported:

“Request failed with status code 408”


No errors are reported in the Nginx logs.

What is code 408?

Is there anything we need to do on the server side to further configure it for large file uploads?

-Thank you

Node: v20.15.1
Arch: x64
Platform: linux
Docker: true
RootDB: pg
PackageVersion: 0.258.3

The HTTP 408 Request Timeout error means the server closed the connection because the client did not finish sending its request within the time the server was prepared to wait. In other words, the connection with the website has "timed out".

See if there is something you can adjust in the Nginx configuration.
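Nginx itself answers with 408 when client_header_timeout or client_body_timeout expires before the client finishes sending the request, so those are the first directives to check. A quick way to see what is currently in effect (sketch; assumes shell access to the proxy host):

# Dump the full effective Nginx configuration and look for the relevant knobs
nginx -T 2>/dev/null | grep -nE 'client_(header|body)_timeout|client_max_body_size|proxy_(read|send)_timeout'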

Please also provide more logs from the server side.

Where does NocoDB keep its logs?

We fiddled with Nginx some more and (finally) arrived at a configuration that works as expected. Uploads up to 2 GB in size are received successfully.

If anyone needs details on the Nginx configuration, please PM me here and I will provide an anonymized template.

-Thank you

Hey @HG2S, it would help immensely if you could share the snippet here in a reply. That way, any community member landing on this post can benefit from it.

Hello @navi ,

The following Nginx configuration should help NocoDB users run the application behind an Nginx reverse proxy.

• Please note that in this case the proxy is co-located with the NocoDB server.
• The design is meant to support remote clients sending big files over slow or unreliable connections.
• Users should customize settings such as timeouts, buffers and domain names as needed.

# Configure SSL/TLS
ssl_certificate /etc/letsencrypt/live/my.nocodb.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/my.nocodb.com/privkey.pem;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

# Configure upload size and timeouts
client_max_body_size 3G;
client_body_timeout 300s;
keepalive_timeout 65;

# Optimize buffers
client_body_buffer_size 10M;
large_client_header_buffers 4 8k;

# Configure server block
server {
    listen 443 ssl http2 backlog=4096;
    server_name my.nocodb.com;
    
    # Enable gzip compression
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/plain application/x-javascript text/xml text/css image/svg+xml application/json;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    
    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff"; 
    # add_header Content-Security-Policy "default-src 'self'"; # NB: Appears to conflict with Google (Ad?)
    add_header X-XSS-Protection "1; mode=block";
    add_header Referrer-Policy "strict-origin";
    add_header Permissions-Policy "interest-cohort=()" always;
    
    # Additional security headers for Google. Still needs work. 
    # add_header Content-Security-Policy "script-src 'self' https://www.google.com https://pagead2.googlesyndication.com;";
    add_header Content-Security-Policy "style-src 'self' 'unsafe-inline';";
    add_header Content-Security-Policy "img-src 'self' https://www.google.com https://pagead2.googlesyndication.com;";
    add_header Content-Security-Policy "frame-src 'self' https://www.google.com https://pagead2.googlesyndication.com;";
    add_header Content-Security-Policy "connect-src 'self' https://googleads.g.doubleclick.net https://pagead2.googlesyndication.com;";
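    # Note: browsers treat each Content-Security-Policy header as a separate
    # policy and enforce all of them together, so once the policy is finalized
    # these directives are usually merged into a single add_header line.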

    # Hide Nginx version
    server_tokens off;

    # Configure caching
    expires 30d;
    add_header Cache-Control "public, max-age=2592000";
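    # Note: these caching headers apply to every response from this server,
    # including proxied API calls; scoping them to static assets may be safer.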
    
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Increase timeout for large file uploads
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 3600;
        sendfile off;
        tcp_nopush off;
        tcp_nodelay off;
        
        # Enable keepalive connections
        keepalive_timeout 65;
        keepalive_requests 100;
        
        # Limit access to sensitive directories
        location ~* /(private|config|logs) {
            deny all;
            return 403;
        }       
    }
}

# Include any additional configuration files
include /etc/nginx/conf.d/*.conf;
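After editing, the configuration can be validated and applied with something like the following (assuming a systemd-based host):

# Check the syntax, then reload without dropping active connections
sudo nginx -t && sudo systemctl reload nginx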

The configuration was evaluated by Security Headers and Mozilla HTTP Observatory and received grades of A+ and B, respectively. That said, further work needs to be done to refine the Content Security Policy (CSP).

Please post questions, comments, improvements and alternative approaches. We are newcomers to NocoDB and are still climbing the learning curve.

We hope this helps.

Many thanks.