Sakai
SAK-9718

Quota Calculations cause all resources in a site to be loaded into memory, killing any put performance

    Details

    • Previous Issue Keys:

      Description

When resourceCommitEdit() is called, the quota calculation loads all the resources in a site into memory to calculate the quota.

This is OK for small sites with 5-10 resources, but in larger sites where hundreds of files have been uploaded it causes massive garbage collection and kills performance. It is particularly bad with WebDAV access, where every PUT, however big, triggers 100 or more getMembers() calls against every collection in the site (once per collection, so not cacheable).

The quota calculation should be maintained in one place only so it doesn't have to be re-calculated every time.

It might be worth looking at how other filesystems with quotas handle this.
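The fix direction suggested above (maintain the quota in one place so it is not re-computed on every commit) can be sketched as an incremental running total per site. The class and method names here are hypothetical illustrations, not the actual Sakai ContentHosting API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical incremental quota tracker: instead of re-walking every
 * collection in the site on each commit (100+ getMembers() calls),
 * keep one running byte total per site and apply only the size delta
 * of the resource being committed.
 */
public class SiteQuotaTracker {
    private final Map<String, Long> usageBySite = new ConcurrentHashMap<>();

    /** Apply the size change of one resource; returns the new site total. */
    public long applyDelta(String siteId, long oldSize, long newSize) {
        return usageBySite.merge(siteId, newSize - oldSize, Long::sum);
    }

    /** O(1) quota check; no site-wide traversal needed. */
    public boolean overQuota(String siteId, long quotaBytes) {
        return usageBySite.getOrDefault(siteId, 0L) > quotaBytes;
    }

    public static void main(String[] args) {
        SiteQuotaTracker t = new SiteQuotaTracker();
        t.applyDelta("site1", 0, 400_000);        // new 400K upload
        t.applyDelta("site1", 0, 500_000);        // second upload, total 900K
        System.out.println(t.overQuota("site1", 1_000_000)); // false
        t.applyDelta("site1", 500_000, 900_000);  // replace with a bigger file
        System.out.println(t.overQuota("site1", 1_000_000)); // true
    }
}
```

A real implementation would also need to persist the total and reconcile it across cluster nodes, which is why the clustered test case below matters.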

        Issue Links

          Activity

          Megan May added a comment -
          2.4.0.014 bound
          Megan May added a comment -
          TESTING GUIDANCE
          =====================================
For a single node there is a very simple test:
get 4-5 WebDAV sessions uploading to the same site at the same time; you might multiply that across a few sites.
Then do the same but lower the quota to make it go over quota.
-----------
For a cluster, repeat the test with the sessions split between nodes.



 (Preliminary testing of fix) I have done this for both clustered and non-clustered setups with a 400 MB data set of files ranging from 10K to 5M. - Ian
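The single-node guidance above can be scripted as concurrent WebDAV PUTs. The URL, credentials, and file sizes below are placeholder assumptions, not values from this issue:

```shell
#!/bin/sh
# Hypothetical load test: SESSIONS concurrent WebDAV clients each PUT
# a set of files of varying size into the same site. Adjust DAV_URL
# and credentials for your instance; lower the site quota in a second
# run to exercise the over-quota path.
DAV_URL="https://localhost:8080/dav/site1"   # assumption: your DAV root
SESSIONS=5

for i in $(seq 1 "$SESSIONS"); do
  (
    # Each "session" uploads files from 10K up to 5M.
    for size_kb in 10 100 1024 5120; do
      f="upload_${i}_${size_kb}k.bin"
      dd if=/dev/urandom of="$f" bs=1024 count="$size_kb" 2>/dev/null
      curl -s -u "user:pass" -T "$f" "$DAV_URL/$f"
      rm -f "$f"
    done
  ) &
done
wait
```

For the cluster case, point each backgrounded loop at a different node's URL instead of a shared DAV_URL.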
          Andrew Poland added a comment -
          merged to 2-4-x r29918
          Megan May added a comment -
          updating fix version to include 2.4.x
          Peter A. Knoop added a comment -
Trunk was missing as a fix version even though the fix was checked in, so adding it.

            People

            • Assignee:
              Unassigned
              Reporter:
              Ian Boston
            • Votes:
              2
              Watchers:
              3

              Dates

              • Created:
                Updated:
                Resolved: