Help the documentation team understand which areas of ASP.NET and Tooling need better help topics

Small databases

I'd like to understand the best practice for small databases. For example, say at Contoso University we know there are only going to be a few hundred or a few thousand students and courses. So all the data would comfortably fit in memory. So then is it better to use an in-memory collection and avoid potentially high-latency database operations? If so, how to periodically write the data to storage?

I am thinking of small-scale production web sites deployed to Windows Azure. Thanks.
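One common answer to the "how to periodically write the data to storage?" part of the question is a write-behind cache: keep the collection in memory, mark it dirty on each update, and flush it on a timer. Below is a minimal sketch; the `StudentStore` type, the five-minute interval, and the `SaveToStorage` stub are illustrative assumptions, not an ASP.NET API.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class Student
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class StudentStore : IDisposable
{
    private readonly ConcurrentDictionary<int, Student> _students =
        new ConcurrentDictionary<int, Student>();
    private readonly Timer _flushTimer;
    private int _dirty; // 0 = clean, 1 = has unsaved changes

    public StudentStore()
    {
        // Flush every five minutes; a crash loses at most that window of writes.
        _flushTimer = new Timer(_ => Flush(), null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    public Student Get(int id) =>
        _students.TryGetValue(id, out var s) ? s : null;

    public void Upsert(Student student)
    {
        _students[student.Id] = student;
        Interlocked.Exchange(ref _dirty, 1);
    }

    private void Flush()
    {
        // Only write when something changed since the last flush.
        if (Interlocked.Exchange(ref _dirty, 0) == 1)
            SaveToStorage(_students.Values);
    }

    private void SaveToStorage(ICollection<Student> students)
    {
        // Persistence call goes here, e.g. serialize to blob or table storage.
    }

    public void Dispose()
    {
        _flushTimer.Dispose();
        Flush(); // final write on shutdown (not guaranteed if the process is killed)
    }
}
```

The durability trade-off is that an app-pool recycle or crash can lose up to one flush interval of writes, which a database's transaction log would otherwise protect.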

12 votes
    Anonymous shared this idea  ·  Admin response:

    Size is only one consideration. NoSQL often has an advantage over relational databases with large unstructured data sets. If you need relational constraints, transactions, and other features not supplied by NoSQL products, then you need a relational DB.
    I'm working on a sample of my MovieDB (intro to MVC 5) that uses Azure caching and SQL Server. We are working on other samples that use NoSQL, Azure Table Storage, Blobs, etc.

    3 comments

      • David Lean commented:

        Most relational DBs read the data from disk once, then keep it in memory for as long as they can. So if you have more memory than data, it effectively becomes an in-memory database.
        When you update a record, the DB writes to the transaction log and to the in-memory copy. The dirty in-memory page does not get flushed to disk until much later, so the I/O is effectively buffered.

        DBs give you the benefits of:
        • Multiple web servers sharing the same data source.
        • Generic management tools to back up, view, and manage the data.
        • Internal structures that can often detect and correct data errors.

        IIS recycles frequently, flushing its buffers and cached data each time. This often makes re-reading the entire cache from a .CSV file or Azure Table slower than re-reading from SQL Server's memory.

        In short, the size of the data is not the only factor; also consider the operational aspects of the system you are creating.

      • Anonymous commented:

        The particular scenario I am thinking of has a few thousand records that are read-only, although users can create their own items too.

        Think of a collection of movies, albums or song lyrics that has been assembled from a list of a few thousand popular titles. The user can browse the collection (read-only), and most of the time they find what they are looking for there. However the user can also add their own records.

        Since the data fits in memory, and most of it is read-only, would it perhaps be better not to use a database for the popular titles?
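        The read-mostly split described above can be sketched as a two-tier lookup: an immutable in-memory dictionary for the popular titles, loaded once at startup, with user-created records going to a separate durable store. The `CatalogService` and `Movie` types below are illustrative assumptions, and the user store is shown as an in-process dictionary only for brevity; a real app would put user records in a database.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public class Movie
{
    public string Title { get; set; }
    public int Year { get; set; }
}

public class CatalogService
{
    // Loaded once at startup and never mutated, so reads need no locking.
    private readonly IReadOnlyDictionary<string, Movie> _popular;

    // Stand-in for the durable store that holds user-created records.
    private readonly ConcurrentDictionary<string, Movie> _userAdded =
        new ConcurrentDictionary<string, Movie>(StringComparer.OrdinalIgnoreCase);

    public CatalogService(IEnumerable<Movie> popularTitles)
    {
        var map = new Dictionary<string, Movie>(StringComparer.OrdinalIgnoreCase);
        foreach (var m in popularTitles)
            map[m.Title] = m;
        _popular = map;
    }

    public Movie Find(string title)
    {
        // Check the read-only set first; fall back to user records.
        if (_popular.TryGetValue(title, out var m))
            return m;
        return _userAdded.TryGetValue(title, out var u) ? u : null;
    }

    public void AddUserMovie(Movie movie) => _userAdded[movie.Title] = movie;
}
```

        Most lookups hit the read-only dictionary and never touch the database, which matches the "a few thousand records, mostly read-only" scenario.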
