Mutation, Continued
1 So what is Mutation?
Clearly, as programmers we need to think about whether or not to use mutation. We have examples and warnings here, but not principles. We should try to articulate the impact of mutation on computation and some guidelines on how to use it effectively.
Philosophically, we can state the following principle:
Mutation introduces time into computation
To help explain this, consider the following sequence of operations:
LinkedList<Integer> myList = new LinkedList<Integer>();
myList.contains(3) --> returns false
myList.add(3);
myList.contains(3) --> returns true
Mathematically, this sequence is somewhat bizarre: a function, by definition, returns the same output every time it is called on the same input, yet here contains returns two different answers for the same argument. How do we make sense of this?
We understand this by realizing that there is always an implicit argument to every function: the contents of memory. The contains method is still a function (in the mathematical sense), but if we are being honest about it, its type is
contains : LinkedList<E> E (VarNames -> Addresses) -> boolean
Mutation forces you to consider an implicit parameter.
But wait: isn’t memory also an implicit parameter in programs without mutation? Yes, but the whole point of mutation is that it can change the contents of that implicit parameter (beyond adding new names). When you program without mutation, the value tied to each variable name never changes, so the implicit argument is irrelevant. When you program with mutation, you have to think about the contents of that implicit parameter.
But so what? Why is this an issue? Because people tend to forget to think about implicit data. Remember our backtracking tic-tac-toe example? People often forget to "undo" the board edit. With memory not being explicit in the program, that’s easy to do.
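The tic-tac-toe bug can be made concrete with a small sketch. This is a hypothetical Board class (not the lecture's actual code) whose point is the mutate-then-undo discipline that backtracking over a shared, mutated board demands: omit the undo line and the board is silently corrupted for every later branch of the search.

```java
// A minimal sketch (hypothetical Board class, not the lecture's code)
// of the mutate/undo discipline backtracking search demands.
public class UndoSketch {
    static class Board {
        char[] cells = new char[9];          // ' ' means empty
        Board() { java.util.Arrays.fill(cells, ' '); }
        boolean isEmpty(int i) { return cells[i] == ' '; }
    }

    // Try every empty cell; the board is shared, so each edit
    // must be undone before trying the next move.
    static int countMoves(Board b) {
        int count = 0;
        for (int i = 0; i < 9; i++) {
            if (b.isEmpty(i)) {
                b.cells[i] = 'X';   // mutate ...
                count++;
                b.cells[i] = ' ';   // ... and undo; forgetting this line
                                    // silently corrupts later branches
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Board b = new Board();
        System.out.println(countMoves(b));   // 9: all cells start empty
    }
}
```

Because memory is implicit, nothing in the program text forces you to write the undo line; that is exactly why the mistake is easy to make.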
2 When To Use Mutation
Mutation is not uniformly evil. We’ve seen that we cannot create cyclic data (graphs) without it, for example. Many programs also require that you use mutation to maintain information. To understand these conditions, consider our observation that mutation introduces time into computation (via past events). There are many computations when you do want the result to be dependent on past events. In a banking system, for example, the history of past transactions determines a customer’s current balance. This suggests that a banking system should use mutation to reflect transactions over time.
But hang on: when we discussed tic-tac-toe yesterday, we said that we should not have used mutation there. We should have just passed the current board around as an argument. Didn’t that "current board" capture past choices of moves, which would justify state?
The current board does indeed capture past move choices. However, only the search function can modify or consult those choices. In a banking application, different functions add to or consult the history of transactions. Mutation helps coordinate the actions between those different functions. Put differently, mutation helps manage the combination of remembering (the effect of) past events and sharing that knowledge across multiple functions.
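A sketch of the banking case may help, with hypothetical names of my own choosing: several methods share one transaction history, with some appending to it and others consulting it. Mutation is what lets them coordinate through that shared state.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch (hypothetical Account class): deposit and withdraw
// append to a shared history; balance consults it. The current balance
// is determined by the effect of past events.
public class Account {
    private final List<Integer> transactions = new ArrayList<>();

    public void deposit(int amount)  { transactions.add(amount); }
    public void withdraw(int amount) { transactions.add(-amount); }

    // Different functions share the same mutated history.
    public int balance() {
        int sum = 0;
        for (int t : transactions) sum += t;
        return sum;
    }
}
```

Contrast this with the tic-tac-toe search: there, only one function ever touched the board, so passing it as an argument was enough.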
3 A Practical Perspective
A friend of mine is on the Google Chrome team. These are his comments on the role of functional programming (versus mutation-based programming) at Google.
What should the type of a search method be? Assume we want (roughly) the same results each time we search on the same terms.
String -> results
What’s the type of a typical command in a system for searching for flights (for example)?
flight-constraints -> void
Both of these operations search massive amounts of data. We therefore want to distribute the computation into smaller chunks that different machines can run in parallel. Which of these two type contracts is easier to parallelize?
Functional code is easy to parallelize; mutation-based code is not.
The average PC stays up roughly 2 months without crashing. If you build your infrastructure on thousands of cheap machines, you’ll have one failing every few minutes. You must be able to move computations to new machines when a machine goes down.
Testing is much harder with shared state or lots of local state.
How do you schedule updates to highly-shared data structures when machines might die mid-computation? Functions have to return effects (ie, operations to perform on data). Now, highly-parallel algorithms such as map-reduce can work with the results.
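To make the parallelization point concrete, here is a sketch (shard contents and names are made up for illustration) of why the pure String -> results contract distributes easily: each shard can be searched independently, with no shared state, and the per-shard results merged at the end. A void-returning, mutating search offers no such decomposition.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// A minimal sketch: because searchShard is pure (no shared state),
// the shards can be searched in parallel and the results merged.
// parallelStream stands in for "different machines".
public class ShardedSearch {
    static List<String> searchShard(List<String> shard, String term) {
        return shard.stream()
                    .filter(doc -> doc.contains(term))
                    .collect(Collectors.toList());
    }

    static List<String> search(List<List<String>> shards, String term) {
        return shards.parallelStream()
                     .flatMap(s -> searchShard(s, term).stream())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> shards = Arrays.asList(
            Arrays.asList("cats purr", "dogs bark"),
            Arrays.asList("cats nap"));
        System.out.println(search(shards, "cats").size());  // 2
    }
}
```

If a shard's machine dies, its (pure) computation can simply be rerun elsewhere, which is exactly the failure story the comments above describe.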
3.1 Reflection
Think about this proposal that a function return effects rather than modify data: what it really does is make an implicit parameter (memory) explicit. When we pass a tic-tac-toe board as an argument rather than modify it, we are making the board data explicit. Making data updates explicit turns methods back into functions. This is a powerful and useful technique that simplifies many real-world concerns such as those Google faces.
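One way to picture "returning effects" is the following sketch (the names are mine, not Google's): the decision-making function computes a description of an update without performing it, and a separate coordinator applies the description later. If a machine dies, the coordinator can rerun the pure part elsewhere.

```java
// A minimal sketch (hypothetical names) of returning an effect
// instead of performing it: chooseMove is pure and returns a
// description of the update; apply performs it in one place.
public class Effects {
    // An effect: "set cell index to value" on some shared board.
    static class SetCell {
        final int index;
        final char value;
        SetCell(int index, char value) { this.index = index; this.value = value; }
    }

    // Pure: inspects the board, returns an effect, mutates nothing.
    static SetCell chooseMove(char[] board) {
        for (int i = 0; i < board.length; i++)
            if (board[i] == ' ') return new SetCell(i, 'X');
        return null;
    }

    // Applying effects is isolated here; a coordinator controls when.
    static void apply(char[] board, SetCell e) { board[e.index] = e.value; }

    public static void main(String[] args) {
        char[] board = {' ', 'O', ' '};
        SetCell move = chooseMove(board);   // no mutation yet
        apply(board, move);                 // explicit, scheduled update
        System.out.println(board[0]);       // X
    }
}
```

Note how this is the same move as passing the tic-tac-toe board around: the update, rather than memory, has become an explicit value in the program.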
4 Back to Graph Representations
Buried in our lecture notes on graphs is a proposal that we didn’t have time to cover in lecture: rather than accumulate the visited nodes in a list, we could have a "visited" flag within each node that we set when a node is visited.
What are the implications and tradeoffs of this representation versus our representation that passed around a list of visited nodes?
You have to remember to tear down after each operation.
Modifying the nodes requires less memory than passing around a list.
Modifying the nodes assumes that you have access to them (if you are writing a mashup over the Google-Maps API, you don’t get to modify the map graph).
Modifying the nodes assumes that only one graph traversal will be running at a time.
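The tradeoffs above can be sketched with a hypothetical Node class (not our lecture code), including the teardown step the first point warns about: the flags must be reset after each traversal, or the next traversal sees a corrupted graph.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch (hypothetical Node class) of the visited-flag
// representation, including the teardown the tradeoff list warns about.
public class VisitedFlag {
    static class Node {
        boolean visited = false;
        List<Node> neighbors = new ArrayList<>();
    }

    static int visit(Node n) {
        if (n.visited) return 0;
        n.visited = true;                    // mutate the graph itself
        int count = 1;
        for (Node m : n.neighbors) count += visit(m);
        return count;
    }

    static int countReachable(Node start, List<Node> allNodes) {
        int count = visit(start);
        for (Node n : allNodes) n.visited = false;  // teardown: skip this
                                                    // and the next traversal
                                                    // finds nothing new
        return count;
    }

    public static void main(String[] args) {
        Node a = new Node(), b = new Node();
        a.neighbors.add(b);
        b.neighbors.add(a);                  // a cycle, which needs the flag
        List<Node> all = List.of(a, b);
        System.out.println(countReachable(a, all));  // 2
        System.out.println(countReachable(a, all));  // still 2 after teardown
    }
}
```

Notice also that this version mutates the shared graph, so it assumes a single traversal at a time, just as the last point says; the visited-list version kept that history in a parameter instead.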
Note that none of this discussion was about whether we used a mutation-based data structure (ie, a LinkedList) to store the visited nodes. That list is a local data structure; you can handle that however you want. The question is whether we modify the overall data structure or use a parameter to store our relevant history. There are cases in which each approach is arguably preferable.
5 Summary
The message from all of this is that you should think carefully about not just your data structures, but the properties of their implementations, when writing larger-scale software. We’re not saying "never use a mutation-based data structure" – that would be silly, as it would imply that you never use the built-in Java libraries, for example.
We are, however, saying that you need to think about mutation across the interfaces of your code. Do you want clients of your code to have a stateful view of your code (regardless of whether you use state internally)? What should your code return (void or effects)? You have control over your APIs and over the modifiers you put on your objects (such as final, which helps prevent clients from modifying a piece of data).