Profile Manager. It ain’t so bad…

Apple’s Profile Manager has a reputation for poor performance: it becomes unresponsive and doesn’t scale well beyond a few hundred clients. If you run any sort of large Apple device deployment you will inevitably be steered toward one of the many other MDM solutions and told that Profile Manager is more of a “reference implementation”.

That said, all MDM solutions work from the same set of APIs, which limits their feature set. If you want to know what features an MDM solution can implement, read Apple’s Mobile Device Management Protocol Reference. It sets out which features are supported by the various Apple platforms and OS versions. An MDM solution can’t add new features arbitrarily, simply because the client OS won’t understand them. If Profile Manager supports the features of the MDM protocol you intend to use, there is no reason you shouldn’t be able to use it in your environment, especially since it is one of the most cost-effective solutions available.

But performance…

It is true that the default configuration of Profile Manager has scalability issues. A Profile Manager server that hasn’t been tuned for production will frequently become unresponsive and stop pushing profiles. It is important to understand the technologies used to build it, primarily Ruby on Rails and PostgreSQL. The main performance culprit is the PostgreSQL database that stores information on all of the clients and configurations you set up in Profile Manager. Profile Manager ships with a mostly default PostgreSQL configuration, which needs to be tuned for a production environment with a high client count.

Disclaimer:

Make sure you have a backup of your Profile Manager server before making any of these changes. Any time you change a production system mistakes can happen, and it is always a good idea to have an exit strategy for when things go wrong. YMMV: the right values depend on the hardware configuration of your server and the number of clients you have in Profile Manager.

Tuning Profile Manager:

  • The more clients you plan to serve with Profile Manager, the more RAM you should have. Currently I have a PM server with ~1500 clients and 16GB of RAM. Queries are handled sub-second and RAM utilization sits around 75-80%. If more clients are added to this server, it will also need a memory upgrade.
  • There are several settings in the PostgreSQL config that you will want to look at while tuning for performance. Many of them scale with how much RAM your PM server has. While there are over a hundred settings you can change when tuning PostgreSQL, we are going to focus on just five (a rough worked example follows this list).
    • shared_buffers (default value: 256MB) – Sets the amount of shared memory that can be used by all PostgreSQL processes and works primarily as a disk cache. For more information you can read the PostgreSQL documentation.
    • max_connections (default value: 200) – Sets the maximum number of connections that will be allowed to the server. Any requests over this amount will be denied. For more information you can read the PostgreSQL documentation.
    • work_mem (default value: 1MB) – Sets the amount of memory a single sort or hash operation can allocate. If the work_mem value is too small, a temporary file is used instead, forcing PostgreSQL to hit storage to complete the query. For more information you can read the PostgreSQL documentation.
    • maintenance_work_mem (default value: 16MB) – Sets the amount of memory maintenance operations can use. For more information you can read the PostgreSQL documentation.
    • checkpoint_segments (default value: 10) – Sets the maximum number of log segments between automatic WAL checkpoints. Increasing the number can increase the time needed for crash recovery. For more information you can read the PostgreSQL documentation.
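
As a rough back-of-the-envelope check, the arithmetic behind the values used in the example configuration further down can be sketched in a few lines of shell. The ratios here are my own rule-of-thumb reading, not output from the calculators below, so treat them as a starting point rather than a recommendation.

RAM_MB=16384                                       # total RAM in the server
echo "shared_buffers       = $((RAM_MB / 4))MB"    # ~25% of RAM -> 4096MB
echo "maintenance_work_mem = $((RAM_MB / 16))MB"   # ~1/16th of RAM -> 1024MB
echo "work_mem             = $((RAM_MB / 200))MB"  # ~RAM / max_connections -> ~82MB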

There are several online calculators that will help you determine the correct values for your server configuration; here are a couple:

  • PGTune – Use the DB Type “Online transaction processing systems”
  • PGConfig – Use the Application Profile “ERP or long transaction applications”

Example configuration:

Path to server config file: /Library/Server/ProfileManager/Config/PostgreSQL_config.plist
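
Before editing, take a copy of the stock file, per the disclaimer above. Something along these lines works; the .bak name is just my convention:

sudo cp /Library/Server/ProfileManager/Config/PostgreSQL_config.plist \
        /Library/Server/ProfileManager/Config/PostgreSQL_config.plist.bak
sudo nano /Library/Server/ProfileManager/Config/PostgreSQL_config.plist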

The default settings that were changed in this config:

  • shared_buffers to 4096MB
  • checkpoint_segments to 32
  • added line for work_mem and set to 82MB
  • added line for maintenance_work_mem and set to 1024MB
  • max_connections was not changed in this example; if you are seeing denied connections in your log you may need to increase this value
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>ProgramArguments</key>
	<array>
		<string>-D</string>
		<string>/Library/Server/ProfileManager/Config/ServiceData/Data/PostgreSQL</string>
		<string>-c</string>
		<string>unix_socket_directories=/Library/Server/ProfileManager/Config/var/PostgreSQL</string>
		<string>-c</string>
		<string>default_transaction_isolation=serializable</string>
		<string>-c</string>
		<string>logging_collector=on</string>
		<string>-c</string>
		<string>log_rotation_size=10MB</string>
		<string>-c</string>
		<string>log_connections=off</string>
		<string>-c</string>
		<string>log_lock_waits=on</string>
		<string>-c</string>
		<string>log_statement=none</string>
		<string>-c</string>
		<string>log_min_duration_statement=1000</string>
		<string>-c</string>
		<string>log_line_prefix=[%p] [%m] (%u-%x) </string>
		<string>-c</string>
		<string>listen_addresses=</string>
		<string>-c</string>
		<string>log_directory=/Library/Logs/ProfileManager</string>
		<string>-c</string>
		<string>log_filename=PostgreSQL-%F.log</string>
		<string>-c</string>
		<string>log_file_mode=0640</string>
		<string>-c</string>
		<string>log_min_messages=WARNING</string>
		<string>-c</string>
		<string>log_min_error_statement=WARNING</string>
		<string>-c</string>
		<string>unix_socket_group=_devicemgr</string>
		<string>-c</string>
		<string>unix_socket_permissions=0770</string>
		<string>-c</string>
		<string>max_connections=200</string>
		<string>-c</string>
		<string>shared_buffers=4096MB</string>
		<string>-c</string>
		<string>max_locks_per_transaction=128</string>
		<string>-c</string>
		<string>max_pred_locks_per_transaction=128</string>
		<string>-c</string>
		<string>checkpoint_segments=32</string>
		<string>-c</string>
		<string>work_mem=82MB</string>
		<string>-c</string>
		<string>maintenance_work_mem=1024MB</string>
	</array>
</dict>
</plist>
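
After saving the plist, restart the Profile Manager service so PostgreSQL is relaunched with the new arguments. On the OS X Server versions I have worked with, the serveradmin service name is devicemgr; run "sudo serveradmin list" to confirm if yours differs.

sudo serveradmin stop devicemgr
sudo serveradmin start devicemgr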

Final notes:

The total number of clients a PM server can support appears to be directly related to the amount of RAM in the system; the other performance metrics (CPU utilization, HD IOPS, network traffic) all stay low. CPU single-thread performance can matter for PostgreSQL when a large number of clients query the server at the same time. For instance, while enrolling a few hundred new clients at once, PM may appear unresponsive, but it should recover after several minutes; on the server you will see a PostgreSQL instance pinned at 100% CPU during this time. PM appears to sacrifice UI queries in favor of handling client requests, which is a reasonable trade-off.

Look at the PM log files located at /Library/Logs/ProfileManager/, especially the PostgreSQL log and the dmpgHelper log, for information on how your PM server is performing. At this time I have been unable to find an upper limit to the number of clients a single server can support.
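
With the logging settings in the config above (log_min_duration_statement=1000 and log_filename=PostgreSQL-%F.log), slow queries are easy to watch for. A minimal sketch, assuming a query has already hit today's log file:

# any line logged with a duration took longer than the 1000ms threshold
tail -f /Library/Logs/ProfileManager/PostgreSQL-$(date +%F).log | grep duration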

Update:

Apple has fixed this issue; if you are running the latest version of Profile Manager you shouldn’t need to tune PostgreSQL. That said, for very large sites there may still be some changes worth looking into.

Downloading a file from PHP (or anything else) directly from a jQuery POST

Problem:

You have a file that is dynamically generated by some server-side code triggered by a POST request, and you just want the file to download without forcing the user to leave the page they are on. Because that is how it “should” work. Typically, though, you insert a form into the page and the user is directed off the page to download the file.

Solution:

By leveraging jQuery, we can perform several actions. When the “export” button (in this example a div with the id “export_search”) is clicked, the following jQuery code is run. The first thing we need to do is dynamically append a form to the page where the button was clicked. The form needs the page the file will be downloaded from as its action, and a hidden input carrying our POST variable and value. Once the form has been appended to the page, we submit it and then remove it, since we no longer need it. After the form is submitted the file will automatically start downloading. Note that for the browser to stay on the current page, the server-side script must send the file with a Content-Disposition: attachment header.

$(document).on('click', '#export_search', function() {
    var search_val = $('#search_input').val();
    // Build a throwaway form pointing at the export script, submit it,
    // then remove it from the DOM. The quotes around the value attribute
    // matter: without them a search term containing a space breaks the markup.
    $('<form action="./php/export.php" method="post">' +
        '<input type="hidden" name="search_term" value="' + search_val + '">' +
        '</form>').appendTo('body').submit().remove();
});
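
If the download doesn’t start, the response headers are the usual suspect. Here is a quick way to inspect them outside the browser; the host and search term are placeholders:

curl -s -D - -o /dev/null --data "search_term=test" https://example.com/php/export.php
# the response should include a header along the lines of:
#   Content-Disposition: attachment; filename="export.csv"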

How to: Split large packet captures with tcpdump

Problem:

Let’s say that you have captured some traffic with tcpdump, Wireshark, etc., and the resulting file is much larger than you anticipated; you can’t analyze the capture until the original file is broken into much smaller segments.

Solution:

This is where tcpdump comes in handy. The following command will read in your original large file and split it into evenly sized segments of your choosing.

tcpdump -r <path_to_large_pcap> -C <size_in_MB_that_you_want_the_file> -w <path_to_where_you_want_the_files_saved>

So, for instance, the following command will break up the file “network.pcap” into multiple 100MB files: the first is called “output”, then “output1”, “output2” and so on.

tcpdump -r ./network.pcap -C 100 -w ./output
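
A quick sanity check once tcpdump finishes: list the segments and read one back the same way you would the original capture. This assumes the split produced at least a second file (“output1”); with -nn, tcpdump skips name resolution so the preview is fast.

ls -lh ./output*
tcpdump -nn -r ./output1 | head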