[PROD-2] University of Michigan deployment data Created: 24-Apr-2007  Updated: 07-Apr-2016

Status: RESOLVED
Project: Sakai Production/Pilot Deployments

Type: Production/Pilot data
Reporter: Anthony Whyte Assignee: Beth Kirschner
Labels: institution

Attachments: CTools hardware diagram-2.pdf, LONG_to_CLOB_Conversion_in_Oracle.doc, Managing_SESSION_and EVENT_Archives.doc
Region: North America
Country code: US
Organization: University of Michigan
Local name: CTOOLS
Site URL: http://ctools.umich.edu
Sakai version: Sakai 2.9.0
Status: production
Scope: enterprise
System(s) replaced:
Home-grown
Total orgs: 2
Total active users: 45,000
Total active sites: 15,000
Production start date:
Contact(s):
Beth Kirschner
Other information: University of Michigan - Dearborn is also using CTools, with some 500 class sites representing roughly 800 courses for the Fall 2009 term.
OS:
Red Hat Enterprise Linux (RHEL)
Web server: Apache HTTP Server and Tomcat
Db: Oracle 11g
Db version: 11.2.0.2
JVM: Java 1.6.x
Email enabled: Yes
64-bit processing: Yes
Application server(s): 16 X VMware virtual machines, 2 vCPUs and 24 GB RAM each. 11 servers run in the active cluster; a 12th runs outside the cluster as a search server; the remaining 4 are configured and running outside the cluster as hot contingency servers. All servers run Apache 2.2.x fronting Tomcat 7.0.x on JVM 1.6.x, with one JVM configured per server and a 12 GB heap. All servers run 64-bit RHEL 6.x.
Total app servers: 16
Db server(s): Updated DB hardware specs TBA (physical hardware, CPUs, RAM). RHEL 6.x OS, Oracle 11g.
Total db servers: 1
File storage: Migration planned to a NetApp FAS6240 filer served via NFS (Jan/Feb 2013); currently on NetApp gateways backed by SVC storage.
Cluster/load balancing: 4 X Cisco ACE Load Balancers. 2 per data center in HA configuration. Anycast configuration routing traffic to primary LB in each data center.
Hardware: other information: All of the above physical hardware is replicated in a second data center. An Oracle standby database server is configured at the second site for Oracle replication. Sakai file resources are to be synchronized to the secondary filer via SnapMirror (the current configuration is implemented with MetroMirror).

radmind (http://www.radmind.org) is currently the provisioning system in use.
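The per-server JVM sizing described above (one JVM per server, 12 GB heap on 64-bit RHEL) might be expressed with options along these lines. The exact flags used in production are not recorded in this issue, so the values below are an illustrative sketch only:

```shell
# Hypothetical JAVA_OPTS matching the deployment described above:
# one JVM per server with a 12 GB heap. Every flag value here is
# an assumption, not the recorded production configuration.
JAVA_OPTS="-server -Xms12g -Xmx12g -XX:MaxPermSize=512m -Djava.awt.headless=true"
export JAVA_OPTS
echo "$JAVA_OPTS"
```

Setting -Xms equal to -Xmx is a common choice for long-running Tomcat instances, since it avoids heap-resize pauses.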
Administration:
Account, Admin's Preference tool, Archive tool, Config Viewer, Job Scheduler, Membership, Memory/cache tool, Message of the Day (MOTD), On-line, New Account, Realm Editor, Site Editor, Site Info, Site Management, User Preferences, Worksite Setup
Collaboration (general):
Announcements, Email Archive, News, Presence, Schedule, Search, Web content, Wiki
Assessment, Evaluation, Poll and Survey:
Evaluation System, Polls, Test Center (Mneme)
Communications (Asynchronous):
Discussion, Forums, Mailtool, Messages, Podcasts
Communications (Synchronous):
Chat
Resource Management:
Resources, Resources: Citation Helper (Sakaibrary)
Synoptic:
Recent Announcements, Recent Chat Messages, Recent Discussion Items
Teaching and Learning:
Assignments, Dropbox, Gradebook, Goal Management, Melete Lesson Builder (Contrib), OpenCourseWare (OCW) tool, Portfolios (OSP), Post'em, Syllabus
Web Services:
LinkTool, Other
Network authentication: Kerberos
Student information system: Home-grown
Integration: other information: UMIAC is a home-grown data warehouse for PeopleSoft student data; authentication uses Cosign/Kerberos (see http://www.weblogin.org).
Project Manager: .75
System Administrator: > 2.0
Database Administrator (DBA): .50
Developer: 3.0
UI/UE Designer: 1.0
Quality Assurance (QA): 1.50
Technical Writer/Documenter: .25
End-user Support: 3.0
Instructional Designer: 1.0
Trainer: .50
Latitude: 42.29096106747343
Longitude: -83.71745109558105

 Comments   
Comment by John Leasia (Inactive) [ 02-May-2007 ]
Hardware setup
Comment by John Leasia (Inactive) [ 14-Jun-2007 ]
Information about converting from Long to Clob in Oracle database.
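The attached document's procedure is not reproduced in this issue. For reference, modern Oracle releases (9i and later, so including the 10g/11g versions noted elsewhere here) can typically convert a LONG column to CLOB in place with ALTER TABLE, though the attachment may describe a different route (e.g. a TO_LOB-based table rebuild). The schema, table, and column names below are hypothetical:

```shell
# Hypothetical sketch only: in-place LONG -> CLOB conversion.
# Credentials, table, and column names are illustrative, not
# taken from the attached document.
sqlplus -s sakai/password@CTOOLS <<'SQL'
ALTER TABLE example_table MODIFY (long_col CLOB);
-- Regather optimizer statistics after the conversion:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EXAMPLE_TABLE');
SQL
```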
Comment by John Leasia (Inactive) [ 15-Jun-2007 ]
Sakai hardware setup at University of Michigan
Comment by John Leasia (Inactive) [ 12-Jul-2007 ]
Info about how UM manages the event tables.
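The actual procedure is in the attached Word document. A common approach to keeping Sakai's event tables manageable, sketched here with a hypothetical 90-day retention window and archive table name rather than UM's recorded policy, is to copy old rows into an archive table and then delete them:

```shell
# Hypothetical sketch of archiving old SAKAI_EVENT rows; the
# retention window and archive table name are assumptions.
sqlplus -s sakai/password@CTOOLS <<'SQL'
INSERT INTO sakai_event_archive
  SELECT * FROM sakai_event
   WHERE event_date < SYSDATE - 90;
DELETE FROM sakai_event
 WHERE event_date < SYSDATE - 90;
COMMIT;
SQL
```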
Comment by Anthony Whyte [ 15-Jul-2007 ]
Per J. Leasia:

"The University of Michigan moved to 2.4.x on Saturday, July 14. We had an extended outage of about 10 hours as we performed an Oracle update, ran 2.3 to 2.4 conversions and the old to new chat conversion. All went well.

We have in production the usual tools, plus Citations, Messages, Forums.
We have as stealth tools ePortfolio, Postem, iTunes U, Podcasts, Search, Goal Management, Test & Quiz, Test Center, Polls, PageOrder, SummaryCalendar.
Our build is based on 2.4.x as of a couple weeks ago. We have added the assignment tag pre_2-5-0_QA_001, and the post-2.4.x gmt branch.

We don't have much customization other than via sakai.properties: gradebook grade scales defined in a custom component.xml, a few custom tools, and we will have to cobble something together quickly to get back the secondary ID (the ID column in Site Info), as that went missing from the new CM; it is to be added to the 2.5 user directory provider, I believe.

We expect to roll several more bug fix releases to production before the start of our Fall term to fix several things we expect will be particularly annoying to our campus."
Comment by Jeff Cousineau (Inactive) [ 15-May-2009 ]
The above configuration is current as of 15 May 2009.

Sun Fire T2000 database servers will be replaced with Sun T5120 servers before the start of the Fall 2009 semester (before the end of August 2009). The new hardware has 64 GB of RAM, expandable to 128 GB.

A new SATA disk shelf (14 TB raw capacity) will be added to both the primary and secondary NetApp FAS3020 filers before the start of Fall 2009 to increase storage capacity. An ONTAP software upgrade is expected to be deployed in order to implement deduplication.
Comment by John Leasia (Inactive) [ 20-Jan-2010 ]
As of 1/20/2010, we're running
 Sakai [2.6.1]
 Tomcat [5.5.27]
 Apache [2.2] + mod_jk
 Java [1.5]
 Oracle [10g]
 Solaris [10]
Comment by Beth Kirschner [ 31-Jan-2011 ]
The production database server is a Sun T5120 running Solaris, with 8 cores (32 threads) and 128 GB of memory; 64 GB should be sufficient.
We configure a 32 GB SGA on the load-test machine and a 64 GB SGA in production, along with a 6 GB PGA, to handle 5 application servers with 50 connections from each app server. We use primary and standby databases through Data Guard for high availability.
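The memory sizing described above would correspond roughly to the following initialization parameters. This is a sketch only: it assumes a server parameter file (spfile) is in use, and the exact parameter values in UM's configuration are not recorded in this comment.

```shell
# Hypothetical sketch of the SGA/PGA sizing described above
# (64 GB SGA and 6 GB PGA in production); assumes an spfile.
sqlplus -s / as sysdba <<'SQL'
ALTER SYSTEM SET sga_target = 64G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 6G SCOPE = SPFILE;
-- 5 app servers x 50 connections = 250 sessions, plus headroom:
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;
SQL
```

Since `processes` is a static parameter, the change takes effect only after the instance is restarted.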
Generated at Wed Oct 16 22:25:27 CDT 2019 using Jira 8.0.3#800011-sha1:073e8b433c2c0e389c609c14a045ffa7abaca10d.