1361
Using What to do with [bugs] questions now that version 9 is released? as a guideline, questions about bugs that have been fixed should have individual tags for each affected version. However, this is impossible for longstanding bugs with many affected versions, as a question may only have five tags total, including bugs . Although we might arbitrarily choose not to use tags for versions past a certain age, this diminishes information and requires future maintenance. We might arbitrarily tag only the last two affected versions, but I think this will be confusing; for example, under that scheme these examples: bugs front-end graphics version-7 version-8 bugs front-end graphics version-8 version-9 would mean that the first bug was fixed in version 9 and the second was fixed in version 10, yet both might be present in version 7 (or earlier). How can we best address this?
Tags should be used for categorization, not for giving additional information. There are bugs which are present in all versions from 8-10, which already takes up 3 tags out of the 5 maximum. One more is taken up by bugs , leaving only one more. Tagging for each version will force us to remove more useful tags from the question, which is bad. Not tagging for all versions that are affected defeats the main purpose of tags: categorization. It won't be possible to search for questions with bugs and version-9 and get all relevant bugs. At this point, the only remaining purpose of the tag is communicating information, which I think is misguided. I propose not using tags for indicating the version any more, and keeping only bugs . Instead of using tags, I propose adding a line at the top of the question (for visibility), something like Note: Fixed in version 10.0.1. Bug present in versions 8-9 and 10.0.0. ... This communicates all the relevant information clearly and concisely, and doesn't use up any tags.
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/1361", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/121/" ] }
1847
I post a lot of questions and I always obtain learned answers. But rarely, if ever, am I skilled enough to return the favor and post an answer. How can I contribute more to the community?
I am glad you wish to contribute to our community. One of the best ways to do this is by asking good questions, as noted in the comments. Be clear, concise, and considerate of future readers who may have a similar problem but not understand your circumstance as well as you do. You will find that many of the best questions are posted by users who do not have a particularly high "reputation" score, and IMHO the site would be impoverished without these contributions. In time you will probably find that you can answer questions. Many questions that are asked are quite simple in nature (for experienced users), but it can be wearisome to answer many similar questions, so if you are willing to step up and answer these simple questions it can free up the time of more advanced users to answer deeper questions. (Incidentally, this is exactly what I did when I joined Stack Overflow when our community was centered there. To a significant degree it is what I still do, though I like to think that I answer somewhat deeper questions with greater frequency now.) Many posts initially need some editing, especially those by new users. Performing this editing is a significant service to the community, as is instructing said users in correct formatting and the use of the editing tools. Posts that are "spam" or have serious problems should be flagged for moderator attention, and when you acquire the Close Vote "privilege" its well-reasoned use will be much appreciated.
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/1847", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/12659/" ] }
1861
E.g. here: Can Mathematica solve functional equations with nested variable? the original pictures are way too big. Smaller pictures improve readability, unless they are so small that you can't read the content. Can I quickly add an HTML tag or something similar? I'd go with import->resize->upload but I'm too lazy today :)
Imgur natively supports a handful of different sizes for the same image, which you can access by just changing the URL of the image (see the last letter of the URLs in 2-7 with respect to the first one): Original: http://i.stack.imgur.com/kEZJ5.png Huge: http://i.stack.imgur.com/kEZJ5h.png Large: http://i.stack.imgur.com/kEZJ5l.png Medium: http://i.stack.imgur.com/kEZJ5m.png Thumbnail: http://i.stack.imgur.com/kEZJ5t.png Big square: http://i.stack.imgur.com/kEZJ5b.png Small square: http://i.stack.imgur.com/kEZJ5s.png So what I typically do is display the image as a medium-sized one, but link it to the high-res one: [![](http://i.stack.imgur.com/kEZJ5m.png)](http://i.stack.imgur.com/kEZJ5.png)
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/1861", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/5478/" ] }
2578
I've noticed that you must sometimes wait days to find halfway decent answers at Wolfram Community, but when posting similar questions at StackExchange Mathematica, answers sometimes come within minutes, and they are often very well written, with evident pride from its members. Why is it that we seem to attract a more dedicated clique of participants?
Probably a combination of many factors. I will list some differences between SE sites (M SE in particular) and Wolfram Community, which I consider important in this context: Historically M SE (this site) is a successor of the Stack Overflow Mathematica tag, which was the place where the original smaller group of enthusiasts had enough time to gather and build a strong community core. The Stack Overflow Mathematica tag itself was empowered by an influx of users from MathGroup, which happened around 2010-2011. So M SE is in many ways a successor of both, which matters both in terms of the culture of the M SE community and in terms of the actual people involved. Wolfram Community came out later and had no such benefits, although of course some fraction of former MathGroup users have migrated there. The Stack Exchange Q/A model has been optimized for answers with high information density. Each answer is a standalone piece of information, which is not supposed to be the start of any kind of discussion thread (which is not the case for the Wolfram Community model, where answers very frequently transform into long threads of discussion, which lower the information density and make it harder to find the relevant part). This also makes SE answers easier to index and to be promptly returned by search engines. Wolfram Community does not have a distinction between comments and answers. Stack Exchange has an explicit distinction, and the two serve very different purposes. Comments are more or less a back door to allow some amount of discussion around the answers, which also serves as a source of extra fun and a community-building device. But even so, extended comment discussions are also discouraged. The Stack Exchange voting model, as well as things such as badges etc., makes it fun to answer and compete with fellow users. While in the short term it does not guarantee that the best answer bubbles to the top / gets the most votes, in the long term this usually happens.
While Wolfram Community has voting functionality, for some reason (lack of comments, maybe?) the competing part is not there, which takes a big chunk of the fun out. And because the answers there frequently turn into threads of replies, voting is not nearly as effective an answer-ranking device either. Stack Exchange has a powerful community involvement model, where users can perform moderation, edit posts of others, etc. This brings extra fun and a sense of liveliness to the site. Stack Exchange sites have a very high degree of polish, coming from the manpower behind the site, the effort spent on development, and the long history of getting massive feedback from the community via meta sites and evolving the sites / rules accordingly. Wolfram Community can't be even remotely compared to SE in this regard, since it probably has just a single developer. M SE is not directly associated with Wolfram, while Wolfram Community is - for whatever difference that may make. On the other hand, Wolfram Community is much better suited for posting various explorations, tutorials, computational essays, and otherwise showing some work one may have done, which is what I personally find it most valuable for. It is also useful as a source of official opinions and information on various topics coming directly from the company and fully endorsed by it. Besides, a lot of questions that don't fit the M SE strict Q/A format are allowed on Wolfram Community, which in many cases can also be considered a definite plus for it. So at the end of the day, I personally find both of them useful (albeit for different things). But as I have limited time to spend on this sort of activity, and can't afford to monitor both sites closely, I personally mostly monitor M SE, since I just find myself much more at home here. This is of course a very personal choice.
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/2578", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/37721/" ] }
1
From the front end, \[InvisibleApplication] can be entered as Esc @ Esc , and is an invisible operator for @ ! By an unfortunate combination of key-presses (there may have been a cat involved), this crept into my code and I spent a great deal of time trying to figure out why in the world f x was being interpreted as f[x] . Example: Now there is no way I could've spotted this visually. The *Form s weren't of much help either. If you're careful enough, you can see an invisible character between f and x if you move your cursor across the expression. Eventually, I found this out only by looking at the contents of the cell. There are also \[InvisibleSpace] , \[InvisibleComma] and \[ImplicitPlus] , which are analogous to the above. There must be some use for these (perhaps internally), which is why they have been implemented in the first place. I can see the use for the invisible space (it lets you place superscripts/subscripts without needing anything visible to latch on to) and the invisible comma (it lets you use indexing as in math). It's the invisible apply that has me wondering... The only advantage I can see is to sort of visually obfuscate the code. Where (or how) is this used (perhaps internally?), and can I disable it? If it's possible to disable, will there be any side effects?
It is used in TraditionalForm output, e.g. here: TraditionalForm[ Hypergeometric2F1[a,b,c,x] ] Without \[InvisibleApplication] it would probably be hard for Mathematica to parse it back to InputForm . Probably it is used in more places internally. In order to get rid of it: Locate the file UnicodeCharacters.tr in /usr/local/Wolfram/Mathematica/8.0/SystemFiles/FrontEnd/TextResources (or the equivalent under Windows or MacOSX), make a backup of the file, open it and delete the line 0xF76D \[InvisibleApplication] ($@$ ... Then your cat can jump on the keyboard again.
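If you only want to confirm that an invisible character has crept into an input, one simple check (a sketch; exact code points may vary by version) is to look at the character codes of the offending text as a string. \[InvisibleApplication] lives in a private-use area, so any code point above 127 in otherwise-ASCII input is suspect:

```mathematica
(* "f x" entered with an accidental \[InvisibleApplication] between f and x *)
codes = ToCharacterCode["f\[InvisibleApplication]x"]

(* keep only the non-ASCII code points; per the 0xF76D entry in
   UnicodeCharacters.tr mentioned above, expect one private-use character *)
Select[codes, # > 127 &]
```

Mapping the result through FromCharacterCode (or comparing against ToCharacterCode["\[InvisibleApplication]"]) then identifies the culprit without squinting at the notebook.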
{ "source": [ "https://mathematica.stackexchange.com/questions/1", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5/" ] }
2
Cases , Select , Pick and Position each have different syntaxes and purposes, but there are times when you can express the same calculation equivalently using any of them. So with this input: test = RandomInteger[{-25, 25}, {20, 2}] {{-15, 13}, {-8, 16}, {-8, -19}, {7, 6}, {-21, 9}, {-3, -25}, {21, -18}, {4, 4}, {2, -2}, {-24, 8}, {-17, -8}, {4, -18}, {22, -24}, {-4, -3}, {21, 0}, {19, 18}, {-23, -8}, {23, -25}, {14, -2}, {-1, -13}} You can get the following equivalent results: Cases[test, {_, _?Positive}] {{-15, 13}, {-8, 16}, {7, 6}, {-21, 9}, {4, 4}, {-24, 8}, {19, 18}} Select[test, #[[2]] > 0 &] {{-15, 13}, {-8, 16}, {7, 6}, {-21, 9}, {4, 4}, {-24, 8}, {19, 18}} Pick[test, Sign[test[[All, 2]] ], 1] {{-15, 13}, {-8, 16}, {7, 6}, {-21, 9}, {4, 4}, {-24, 8}, {19, 18}} test[[Flatten@Position[test[[All, 2]], _?Positive] ]] {{-15, 13}, {-8, 16}, {7, 6}, {-21, 9}, {4, 4}, {-24, 8}, {19, 18}} Are there performance or other considerations that should guide which you should use? For example, is the pattern-matching used in Cases likely to be slower than the functional tests used in Select ? Are there any generic rules of thumb, or is testing the particular case you are using the only solution?
In my view, Cases and Position are in one camp (pattern-based functions used for general expression destructuring), while Select is in another: (more) special-purpose functions optimized to work on certain efficient data structures. As was mentioned already, both Cases and Select generally unpack when used with packed arrays. What wasn't mentioned is that Select can easily be compiled, including compilation to C: sel = Compile[{{lst, _Integer, 2}}, Select[lst, #[[2]] > 0 &], CompilationTarget -> "C", RuntimeOptions -> "Speed"]; In[98]:= (res3 = sel[test]);//Timing Out[98]= {0.125,Null} which gives an order of magnitude speed-up in the case at hand. Needless to say, Cases , being a general function using patterns, cannot be compiled, and any attempt to do so will result in a callback to the main evaluator in the compiled code, which defeats the purpose. Another difference is that Select can also work on sparse arrays, while Cases and Position can't. OTOH, Cases and Position are more general in that they can work on arbitrary expressions (not necessarily packed or even regular arrays), and at an arbitrary level. If you happen to have an (even numerical) irregular nested list, where you can't utilize packing, Cases and Position may be able to do things Select can't ( Select is limited to one level only). Performance-wise, Cases / Position can also be very efficient, if the test patterns are constructed properly (mostly syntactic patterns, with no Condition or PatternTest involved, and preferably not containing things like __ , ___ etc. as sub-parts). There are instances when Cases ( Position also, but not as much) is practically indispensable, and this is when you want to collect some information about the expression, while preventing its parts from evaluation.
For example, getting all symbols involved in an expression expr , in unevaluated form, wrapped in HoldComplete (say), is as simple as this: Cases[expr, s_Symbol :> HoldComplete[s], {0, Infinity}, Heads -> True] and quite efficient as well. Generally, patterns and destructuring are very (perhaps most) powerful metaprogramming tools that Mathematica provides. So, my final advice is this: when you have an expression with a fixed regular structure, or even better, numerical packed array, Select or other more precise operations ( Pick etc) may be advantageous, and also more natural. When you have some general (perhaps symbolic) expression, and want to get some non-trivial information from it, Cases , Position and other pattern-based functions may be a natural choice.
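As a rough way to see the packed-array point in practice, a minimal timing sketch (timings will vary by machine and version, and Pick's vectorized mask is typically the fastest of the three here) might look like:

```mathematica
(* a large packed integer array *)
test = RandomInteger[{-25, 25}, {10^6, 2}];

(* pattern matching: unpacks the array *)
First@AbsoluteTiming[Cases[test, {_, _?Positive}]]

(* per-element functional test: also unpacks unless compiled *)
First@AbsoluteTiming[Select[test, #[[2]] > 0 &]]

(* vectorized mask over the second column: stays packed *)
First@AbsoluteTiming[Pick[test, Sign[test[[All, 2]]], 1]]
```

All three return the same rows, so on regular numeric data the choice comes down to exactly the trade-offs described above.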
{ "source": [ "https://mathematica.stackexchange.com/questions/2", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8/" ] }
6
A very common feature of graphs of functions used throughout textbooks is simple indications, such as open circles, of the points at which a function is undefined. Is there a way to do this in Mathematica? I understand that I can use Epilog (or Prolog , or other explicit drawing graphics functions) to "manually" draw these elements, but is there a way to do this automatically, or at least to return the coordinates of such points for use as arguments to Epilog ?
By using Exclusions and ExclusionsStyle like this Plot[1/Sin[π x], {x, -3, 3}, Exclusions -> Sin[π x] == 0, ExclusionsStyle -> {Thin, Directive[Red, PointSize[0.02]]} ] you can define the location of the discontinuities and the style for poles and discontinuity boundaries, respectively. Also the Wolfram Blog, March 2011 has more examples on this topic.
{ "source": [ "https://mathematica.stackexchange.com/questions/6", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37/" ] }
8
Consider a block (partitioned) matrix matrix = ArrayFlatten[{{a, b}, {c, d}}] where a , b , c and d are each matrices themselves. Say, for example, a = {{a11, a12}, {a21, a22}} b = {{b11, b12}, {b21, b22}} c = {{0, 0}, {0, 0}} d = {{d11, d12}, {d21, d22}} How can you find the block inverse of this matrix? A desired solution, using the example above, is {{Inverse[a] , -Inverse[a].b.Inverse[d]},{0,Inverse[d]}} which is easily verified using Simplify[Inverse[ArrayFlatten[{{a, b}, {c, d}}]] == ArrayFlatten[{{Inverse[a], -Inverse[a].b.Inverse[d]}, {0, Inverse[d]}}]] which yields True . How can you solve the block inverse problem for arbitrary submatrices, and for block matrices of larger sizes (e.g. 3x3, 4x4, etc.)?
Mathematica does not support this directly. You can do things of this sort using an external package called NCAlgebra. http://math.ucsd.edu/~ncalg/ The relevant documentation may be found at http://math.ucsd.edu/~ncalg/DOWNLOAD2010/DOCUMENTATION/html/NCBIGDOCch4.html#x8-510004.4 In particular have a look at "4.4.8 NCLDUDecomposition[aMatrix, Options]" Using this package, you would find the block inverse of the example matrix using: c=0; inverse = NCInverse[matrix] (* Out[] = {{inv[a], -inv[a] ** b ** inv[d]}, {0, inv[d]}} *) Here inv[a] represents the general inverse of the a block of the matrix and the ** represents non-commutative (i.e. matrix) multiplication. This approach works for larger (3x3, 4x4, etc) square block matrices as well.
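Without an external package, the standard 2x2 block-inverse formula via the Schur complement can also be sketched directly in Mathematica for explicit symbolic blocks. Here blockInverse is a hypothetical helper name, and both a and the Schur complement d - c.Inverse[a].b are assumed invertible:

```mathematica
(* 2x2 block inverse via the Schur complement s = d - c.Inverse[a].b *)
blockInverse[{{a_, b_}, {c_, d_}}] :=
 Module[{aInv = Inverse[a], s},
  s = Inverse[d - c.aInv.b];   (* inverse of the Schur complement *)
  {{aInv + aInv.b.s.c.aInv, -aInv.b.s},
   {-s.c.aInv, s}}]
```

When c is a zero block this reduces to {{Inverse[a], -Inverse[a].b.Inverse[d]}, {0, Inverse[d]}}, matching the example in the question; larger block partitions can be handled by applying the same formula recursively to 2x2 super-blocks. Unlike NCAlgebra, this works on concrete submatrices rather than treating the blocks as non-commutative symbols.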
{ "source": [ "https://mathematica.stackexchange.com/questions/8", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9/" ] }
16
Sometimes, I use Mathematica to test hypotheses on homework to make the question easier. For instance, when I have to compute big sums as $n\to\infty$ and Mathematica can't give the exact answer, I set $n$ to a very big number and get an approximate value. Is it possible, with this value, to get possible closed forms just like in Wolfram Alpha (which, by the way, runs Mathematica 's kernel)? Example: I have to find the sum of all $\frac{1}{2p!}$ as $p$ goes from $0$ to $+\infty$, and let's say Mathematica doesn't directly say it's $\frac{e}{2}$, but gives about 1.36 for a big value of $n$. Is there a function which can figure out which constant is near this number?
I can offer a round-about method. First compute the numerical approximation. I obtain, to high precision, In[24]:= N[Sum[1/(2*n!), {n, 0, 100}], 100] Out[24]= 1.359140914229522617680143735676331248878623546849979787483483813862038315176773797285691089262583214 Now paste that into a Wolfram|Alpha query, accessed by clicking on the '+' sign at the upper left of a fresh input cell. This gives, among other things, possible closed forms. To the best of my knowledge, the heuristic methods used by W|A for this task are not directly exposed in any other way in Mathematica proper.
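Within Mathematica itself, one partial alternative is FindIntegerNullVector, which searches for small integer relations among numbers. It only helps when you already suspect which constants are involved, so this is a sketch of that narrower approach rather than a general closed-form finder:

```mathematica
(* high-precision approximation of the sum *)
x = N[Sum[1/(2 n!), {n, 0, 100}], 50];

(* look for small integers {p, q} with p*x + q*E == 0;
   a result proportional to {2, -1} would suggest x == E/2 *)
FindIntegerNullVector[{x, E}]
```

For constants suspected to be algebraic numbers rather than combinations of named constants, RootApproximant plays a similar guessing role from a numerical approximation.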
{ "source": [ "https://mathematica.stackexchange.com/questions/16", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/44/" ] }
18
I consider myself a pretty good Mathematica programmer, but I'm always looking out for ways to either improve my way of doing things in Mathematica , or to see if there's something nifty that I haven't encountered yet. Where (books, websites, etc.) do I look for examples of good (best?) practices of Mathematica programming?
Here's a collection of resources that I started on Mathgroup ( a collection of Mathematica learning resources ) and updated here at Stack Overflow . As this site is dedicated to Mathematica it makes more sense to maintain it here. This represents a huge amount of information; of course it's not exhaustive so feel free to improve it! Also, don't hesitate to share it and suggest other interesting links! Remember, you can always search the online Documentation Center of Mathematica , that is identical to the built-in help of the latest software version. Links to more advanced aspects of the program that you can start to appreciate once you understand the basics are provided in separate answers (below) as this post became too large. Tips and Tricks Advanced evaluation, patterns and neat algorithms Introduction If you're just beginning try to have a look at these videos. Mathematica Basics , Elementary Programming in Mathematica Hands-on Start to Mathematica Several introductory videos by Jon McLoone and many other video introductions and tutorials from the official Wolfram website An elementary introduction to the Wolfram language Fast introduction for programmers Is it necessary to have a prior computational background or is it possible to learn Mathematica as a first programming language? What are the most common pitfalls awaiting new users? How To-s : full solutions for particular tasks from the online documentation Easy-to-understand animations explaining common Mathematica functions Sal Mangano's videos for using pure functions, Part and patterns Introductory videos of various applications of Mathematica What is the best Mathematica tutorial for young people? Basic advices for people new to Mathematica Functional style Avoid iterative programming using loops like For or Do , use instead functional programming functions Map , Scan , MapThread , Fold , FoldList , ... and pure functions. This makes the code cleaner and faster. 
Functional Programming , Functional Programming: Quick Start Pure functions What does # mean in Mathematica? Alternatives to procedural loops and iterating over lists in Mathematica An example: Programming a numerical method in the functional style How to understand the usage of Inner and Outer figuratively? Transpose and dimensions Something not easy to guess alone at the beginning: if you have x={1,2} and y={3,4} , doing Transpose[{x,y}] or {x,y} ESC tr ESC in the front end will produce {{1,3},{2,4}} (format compatible with ListPlot ). This animation helps understand why. You can also use the second argument of Transpose to reorder the indices of a multidimensional list. Don't forget to regularly control the output of the lists you generate using Dimensions . Get familiar with shorthand syntax ( @ , & , ## , /@ , /. , etc.) Operator Input Forms when is f@g not the same as f[g]? Programming easily Getting help : Execute ?Map for example for a short description of a function, or press F1 on a function name for more details and examples about it. You can solve many problems by adapting examples to your needs. Auto-completion : Start typing the name of a function and (in Mathematica 9+) select from the pop-up auto-completion menu, or press Ctrl + k to get a list of functions which names start with what has already been entered. Once the name of the function is written completely press Ctrl + Shift + k (on Mac, Cmd + k ) to get a list of its arguments. Function templates : In Mathematica 9, after typing a function name, press Ctrl + Shift + k (on Mac, Cmd + Shift + k ) and click on the desired form from the pop-up menu to insert a template with named placeholders for the arguments. Other useful shortcuts are described in the post Using the Mathematica front-end efficiently for editing notebooks . Use palettes in the Palettes menu especially when you're beginning. 
In Mathematica 8, use the natural input capability of Wolfram Alpha, for example type "= graph 2 x + 1 between 0 and 3" without the quotes and see the command associated with the result. Tutorials An elementary introduction to the Wolfram language , by Stephen Wolfram Fast introduction for programmers Fundamentals of Mathematica Programming (by Richard Gaylord, great tutorial for an overview of the logic behind Mathematica : patterns) Video tutorial also available Introduction to Mathematica (by Thomas Hahn, another succinct overview of Mathematica) Tutorial Collection by WRI (lots of extra documentation and examples, available as free PDFs, also available and up-to-date in Help > Virtual Book in Mathematica ). Programming Paradigms via Mathematica (A First Course) Mathematica Tutorial: A New Resource for Developers Wolfram's Mathematica 101 http://bmia.bmt.tue.nl/Software/Downloads/Campus/TrainingMathematicaEnglish.zip http://bmia.bmt.tue.nl/Software/Mathematica/Tutorials/index.html A problem centered approach A beginner's guide to Mathematica http://math.sduhsd.net/MathematiClub/tutorials.htm http://www.austincc.edu/mmcguff/mathematica/ http://www.mtholyoke.edu/courses/hnichols/phys303/ http://www.apam.columbia.edu/courses/ap1601y/ (Introduction to Computational Mathematics and Physics) http://ftp.physics.uwa.edu.au/pub/MATH2200/2012/Lectures/ (Applied Mathematics) http://ftp.physics.uwa.edu.au/pub/MATH2200/2009/Lectures (path for some lectures in pdf) http://en.wikibooks.org/wiki/Mathematica http://www.cs.purdue.edu/homes/ayg/CS590C/www/mathematica/math.html (Basic tutorial) https://stackoverflow.com/questions/4430998/mathematica-what-is-symbolic-programming (What is symbolic programming) http://www.cer.ethz.ch/resec/people/tsteger/Econ_Model_Math_1.pdf http://www.physics.umd.edu/enp/jjkelly (An introduction to Mathematica as well as some physics courses) Do you know of any web-based university course that is entirely Mathematica based? 
http://homepage.cem.itesm.mx/jose.luis.gomez/data/mathematica (Tutorials in Spanish) Mathematica programming (some examples of the various programming paradigms that can be used in Mathematica) FAQ http://12000.org/my_notes/faq/mma_notes/MMA.htm (FAQ) https://stackoverflow.com/questions/tagged/mathematica?sort=faq&pagesize=15 (FAQ on Stack Overflow) https://mathematica.stackexchange.com/questions?sort=faq (FAQ on this site) http://library.wolfram.com/conferences/conference98/Lichtblau/SymbolicFAQ.nb (Symbolic FAQ) Books Stephen Wolfram's The Mathematica Book (online, version 5.2), available for free Mathematica programming: an advanced introduction (online) by Leonid Shifrin, available for free Tutorial Collection by WRI (lots of extra documentation and examples, available as free pdfs, also available and up-to-date in Help > Virtual Book in Mathematica ). Mathematica Cookbook by Sal Mangano (O'Reilly, 2010) Mathematica in Action by Stan Wagon (Springer, 2010) Mathematica: A Problem-Centered Approach by Roozbeh Hazrat (Springer, 2010) Mathematica Navigator by Heikki Ruskeepaa (Academic Press, 2009) The Mathematica GuideBooks (for Programming , Numerics , Graphics , Symbolics ) by Michael Trott (Springer, 2004-2005) An introduction to programming with Mathematica by Paul R. Wellin, Richard J. Gaylord and Samuel N. Kamin (Cambridge University Press, 2005); contains an example of Domain Specific Language (DSL) creation. Mastering Mathematica by John W. Gray (Academic Press, 1997) Programming in Mathematica by Roman Maeder (Addison-Wesley Professional, 1997) Programming with Mathematica®: An Introduction by Paul Wellin (Cambridge University Press, 2013) Power Programming With Mathematica: The Kernel , by David B. Wagner (Mcgraw-Hill, 1997), out of print but scanned copy available here . 
http://blog.wolfram.com/2014/01/10/read-up-on-mathematica-in-many-subjects Wolfram Websites Learn http://www.wolfram.com/broadcast/ http://www.wolfram.com/training/courses (Online video courses, most are free) http://www.wolfram.com/training/special-event/ (Links to videos of past conferences) Slides of seminars http://www.youtube.com/user/WolframResearch An elementary introduction to the Wolfram language Fast introduction for programmers Data drop quick reference Examples http://demonstrations.wolfram.com How To-s http://www.wolfram.com/mathematica/new-in-8 http://www.wolfram.com/mathematica/new-in-9 http://www.wolfram.com/mathematica/new-in-10/ http://www.wolfram.com/mathematica/new-in-11/ http://www.wolfram.com/training/special-event/new-in-mathematica-10/ A plot gallery for Mathematica 9 http://www.wolfram.com/language/ Resources http://www.wolfram.com/mathematica/resources http://library.wolfram.com/ (Great amount of resources here) http://support.wolfram.com/kb/topic/mathematica (Knowledge base) http://www.mathematica-journal.com Help Help > Virtual Book http://www.wolfram.com/support/learn/ http://www.wolfram.com/books/ http://reference.wolfram.com Blogs http://community.wolfram.com http://blog.wolfram.com http://blog.wolframalpha.com http://blog.stephenwolfram.com http://twitter.com/#!/mathematicatip Other related sites http://www.mathematica25.com SMP http://blog.stephenwolfram.com/2013/06/there-was-a-time-before-mathematica http://blog.stephenwolfram.com/data/uploads/2013/06/SMPHandbook.pdf http://www.wolframalpha.com Wolfram Science : the official site of Stephen Wolfram's New Kind of Science NKS forum Lecture notes from NKS summer schools Programs from the notes Demonstrations http://computerbasedmath.org/ http://education.wolfram.com (Some interactive basic math courses, useful for curious young people) http://www.wolfram.com/webresources.html (other Mathematica related sites) Virtual conferences 
http://www.wolfram.com/events/virtual-conference/spring-2013 http://www.wolfram.com/events/virtual-conference/2012 http://www.wolfram.com/events/virtual-conference/2011 Mathematica one-liner competition http://www.wolfram.com/events/techconf2010/competition.html http://www.wolfram.com/events/technology-conference/2011/one-liners.html http://www.wolfram.com/training/special-event/mathematica-experts-live-one-liner-competition-2012 Wolfram technology conferences http://www.wolfram.com/events/technology-conference/2016 2015 , http://www.wolfram.com/events/technology-conference/2015 2014 , http://www.wolfram.com/events/technology-conference/2014 2013 , http://www.wolfram.com/events/technology-conference/2013 2012 , http://www.wolfram.com/events/technology-conference/2012 2011 , http://www.wolfram.com/events/technology-conference/2011 2010 , http://www.wolfram.com/events/techconf2010 2009 , 2007 , 2006 , 2005 , 2004 , 2003 , 2001 , 1999 , 1998 , 1997 , 1994 , 1992 http://library.wolfram.com/infocenter/Conferences/
{ "source": [ "https://mathematica.stackexchange.com/questions/18", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/50/" ] }
22
When I use RegionPlot to plot the region between two functions, I get strange gaps in the resulting figure. Is there a way to prevent this from happening? For example, RegionPlot[x^2 < y && y < x^4, {x, -3, 3}, {y, 0, 3}] produces the following strange result:
Just increase the number of PlotPoints : RegionPlot[x^2 < y && y < x^4, {x, -3, 3}, {y, 0, 3}, PlotPoints -> 100]
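If a uniformly high PlotPoints setting is too slow for a more expensive region test, letting the adaptive subdivision work harder instead can help; a hedged variant (the exact values here are just a starting point to tune):

```mathematica
(* fewer initial sample points, but more adaptive refinement
   near the region boundary *)
RegionPlot[x^2 < y && y < x^4, {x, -3, 3}, {y, 0, 3},
 PlotPoints -> 60, MaxRecursion -> 4]
```

PlotPoints controls the initial sampling grid, while MaxRecursion bounds how many times cells straddling the boundary are subdivided, so the gaps close where it matters without oversampling the interior.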
{ "source": [ "https://mathematica.stackexchange.com/questions/22", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37/" ] }
40
I had always wondered if there might be a way to write a function, which I'll call OEISData[] , that more or less works as a curated data function for The On-Line Encyclopedia of Integer Sequences . I would imagine that the usage might be a little something like this: OEISData["A004001"][9] 5 OEISData["A003418"][Range[8, 15]] 840, 2520, 2520, 27720, 27720, 360360, 360360, 360360 OEISData["A005849", "Keywords"] {"hard", "nonn", "nice", "more"} An API or something to retrieve data from the OEIS site might be needed for an implementation of this function. Is a function like this possible, with what Mathematica is currently capable of?
There is a Mathematica package exactly for this at the OEIS wiki . Somewhat related: there's also a package for formatting data into the OEIS format . WolframAlpha also has some of this information, though I'm not sure how to get the $n^{\mathrm{th}}$ term of the sequence. In[1] := WolframAlpha["A004001", {{"TermsPod:IntegerSequence", 1}, "ComputableData"}] Out[1] = {1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8, 9, 10, 11, 12, 12, 13, 14, 14, 15} Or: In[1] := WolframAlpha["A018900", {{"Continuation", 1}, "ComputableData"}] Out[1] = {3, 5, 6, 9, 10, 12, 17, 18, 20, 24, 33, 34, 36, 40, 48, 65, 66, 68, 72}
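A minimal sketch of the imagined OEISData can also be built on the public OEIS search interface directly. The URL scheme and JSON field names below are assumptions about that external service (they reflect the commonly documented fmt=json format, which is not guaranteed to be stable), and oeisTerms is a hypothetical helper name:

```mathematica
(* fetch the listed terms of a sequence from oeis.org;
   assumes the response is {"results" -> {<|..., "data" -> "1,1,2,..."|>, ...}} *)
oeisTerms[id_String] :=
 Module[{res = Import["https://oeis.org/search?q=id:" <> id <> "&fmt=json", "RawJSON"]},
  ToExpression /@ StringSplit[res["results"][[1, "data"]], ","]]
```

Something like oeisTerms["A004001"] would then return the terms on file, from which indexed access such as OEISData["A004001"][9] is a one-line Part lookup; keywords and other metadata live in sibling fields of the same JSON record.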
{ "source": [ "https://mathematica.stackexchange.com/questions/40", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/50/" ] }
52
Background Info In Mathematica, it's only possible to texture map a sphere through the use of SphericalPlot3D or ParametricPlot3D . image = Import["MyTexture.jpg"]; (* Pretend this is something you'd use *) sphere = SphericalPlot3D[1, {theta, 0, Pi}, {phi, 0, 2 Pi}, Mesh -> None, TextureCoordinateFunction -> ({#5, 1 - #4} &), PlotStyle -> Directive[Texture[image]], Lighting -> "Neutral", Axes -> False, Boxed -> False]; Now that's all nice and everything. But what if we want multiple spheres on the same exact image, each with an arbitrary size? To position a single sphere is simple : s1 = Graphics3D[ Translate[First@sphere, {3, 2, 1}], Lighting -> "Neutral"] Then you just position each sphere and Show them together: Show[{s1, s2}, PlotRange->{{-5, +5}, {-5, +5}, {-5, +5}}] The Issue That's great and all, but what if you need each sphere to be a distinct size ? Positioning and sizing regular Sphere[] primitives is easy and built directly into their definition. But if I want the same for a textured sphere, I have to jump through all these hoops. Furthermore, it's not obvious how I can achieve this. Any ideas on how I can achieve arbitrary placement and sizing of textured spheres?
You could use a combination of Translate and Scale . Suppose the radii and centres of the spheres are given by radii = RandomReal[{.1, .6}, 8]; centres = RandomReal[{-2, 2}, {8, 3}]; Then, using the original sphere image = ExampleData[{"ColorTexture", "GiraffeFur"}]; sphere = SphericalPlot3D[1, {theta, 0, Pi}, {phi, 0, 2 Pi}, Mesh -> None, TextureCoordinateFunction -> ({#5, 1 - #4} &), PlotStyle -> Directive[Texture[image]], Lighting -> "Neutral", Axes -> False, Boxed -> False]; you could do, for example, Graphics3D[MapThread[Translate[Scale[sphere[[1]], #1], #2] &, {radii, centres}]] which produces something like this
{ "source": [ "https://mathematica.stackexchange.com/questions/52", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/95/" ] }
53
Dynamic programming is a technique for avoiding the repeated computation of the same values in a recursive program. Each value computed is immediately stored. If the value is needed again, it is not computed but simply looked up in the table. (1) I use orthogonal polynomials a fair bit in my work. Since Mathematica supports only the classical ones , I often have to write my own functions. For instance, the monic Charlier polynomials satisfy the three-term recurrence $$C_{n+1}^{(a)}(x)=(x-a-n)C_n^{(a)}(x)-an C_{n-1}^{(a)}(x)$$ with $C_0^{(a)}(x)=1$ and $C_1^{(a)}(x)=x-a$. If I want to be able to use monic Charlier polynomials in Mathematica , I can do this: CharlierC[0, a_, x_] := 1; CharlierC[1, a_, x_] := x - a; CharlierC[n_Integer, a_, x_] := (x - a - n + 1) CharlierC[n - 1, a, x] - a (n - 1) CharlierC[n - 2, a, x] The problem with this route, of course, is that the effort expended to generate, say, CharlierC[20, a, x] can't be used for evaluating CharlierC[50, a, x] . For a one-argument recursive function (e.g. Fibonacci), dynamic programming is fine and dandy for saving evaluation effort. For a multiple-argument function, imagine what would happen if one had used the definition CharlierC[n_Integer, a_, x_] := CharlierC[n, a, x] = (* stuff *) and then executed Plot[{CharlierC[5, 1, x], CharlierC[6, x, 2]}, {x, -1, 1}] . Is there a way to reap the benefits of dynamic programming on a multiple-argument function, while storing only results where the recursion variable ( n in the Charlier example) changes?
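For reference, the standard one-argument memoization idiom alluded to above (the Fibonacci case) looks like this; each computed value is stored as a new definition the first time it is evaluated:

```mathematica
ClearAll[fib]
fib[0] = 0; fib[1] = 1;
(* fib[n] = ... on the right-hand side saves the result as a definition *)
fib[n_Integer?Positive] := fib[n] = fib[n - 1] + fib[n - 2]
fib[30]
(* 832040 *)
```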
Yes, there is, although the speed-up is not as dramatic as for 1D memoization: ClearAll[CharlierC]; CharlierC[0, a_, x_] := 1; CharlierC[1, a_, x_] := x - a; CharlierC[n_Integer, a_, x_] := Module[{al, xl}, Set @@ Hold[CharlierC[n, al_, xl_], Expand[(xl - al - n + 1) CharlierC[n - 1, al, xl] - al (n - 1) CharlierC[n - 2, al, xl] ]]; CharlierC[n, a, x] ]; (Thanks to @Mike Bantegui for pointing out the wastefulness of Simplify , which has now been removed). What you memoize here are function definitions. Expand is used so that the complexity does not accumulate too fast. The idea is that I first create a new pattern-based definition, using a number of tricks to fool the scoping (variable-renaming) mechanism and localize the pattern variables, and then evaluate this definition. For example: In[249]:= CharlierC[20,a,x];//Timing Out[249]= {0.063,Null} In[250]:= CharlierC[25,a,x];//Timing Out[250]= {0.078,Null} With the original (non-memoizing) definitions, by contrast: In[260]:= CharlierC[25,a,x];//Timing Out[260]= {0.094,Null} Here are the first few generated definitions: In[262]:= Take[DownValues[CharlierC],4] Out[262]= {HoldPattern[CharlierC[0,a_,x_]]:>1, HoldPattern[CharlierC[1,a_,x_]]:>x-a, HoldPattern[CharlierC[2,al$4106_,xl$4106_]]:> al$4106^2-xl$4106-2 al$4106 xl$4106+xl$4106^2, HoldPattern[CharlierC[3,al$4105_,xl$4105_]]:> -al$4105^3+2 xl$4105+3 al$4105 xl$4105+3 al$4105^2 xl$4105 -3 xl$4105^2-3 al$4105 xl$4105^2+xl$4105^3}
{ "source": [ "https://mathematica.stackexchange.com/questions/53", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/50/" ] }
61
I have typically used PowerPoint or plain PDFs of slides to give presentations, but with heavy mathematical content, it can be tedious to create these presentations and make them look good. How can I best make use of Mathematica to give presentations? (I would prefer a slide-by-slide type format to what I've seen a few people do—using a regular notebook with the font size pumped up and collapsing/expanding sections as they go along.)
You can create slide shows in Mathematica and run them to give your presentation. The main advantage of such a slide show over PowerPoint/PDF is that you can play dynamic content. This link gives further details on how to create one: http://reference.wolfram.com/mathematica/howto/CreateASlideShow.html This screencast gives detailed steps on how to create slide shows: http://www.wolfram.com/broadcast/screencasts/howtocreateaslideshow/?w=800&h=600 This notebook has some cool tips: http://library.wolfram.com/infocenter/TechNotes/5299/Tips.nb?file_id=5035 With the CDF format now available in Mathematica, a presentation can be saved in CDF format and presented in any browser in which CDF Player is installed. Quick tips for the impatient!
Create a slide show: File -> New -> Slide Show
Open the Slide Show palette: Palettes -> Slide Show
Run the slide show: View Environment -> SlideShow
Run in full-screen mode: Presentation Size -> Full Screen
{ "source": [ "https://mathematica.stackexchange.com/questions/61", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7/" ] }
96
When reading through the documentation, you often encounter the phrases DownValues , UpValues , SubValues , and OwnValues . I am trying to understand the distinction between the four of them, and how, for a given Symbol , I can use them. Also, are they processed differently internally?
In Mathematica, all functions are really just patterns, and there are different kinds of those. Let's start with OwnValues , which is the pattern type of a variable as you know it from other programming languages. A symbol with an OwnValue has, as the name suggests, an intrinsic, "own" value. In[1] := a = 2; OwnValues[a] Out[1] := {HoldPattern[a] :> 2} A DownValue is defined when the variable itself does not have a meaning, but can get one when combined with the proper arguments. This is the case for most function definitions: f[x_] := x^2 This defines a pattern for f specifying that each time f[...] is encountered, it is to be replaced by ...^2 . This pattern is meaningless if there is a lonely f , In[2] := f Out[2] := f However, when encountered with an argument downwards (i.e. down the internal structure of the command you entered), the pattern applies, In[3] := f[b] Out[3] := b^2 You can see the generated rule using In[4] := DownValues[f] Out[4] := {HoldPattern[f[x_]] :> x^2} The next type of pattern concerns UpValues . Sometimes, it's convenient not to associate the rule with the outermost symbol. For example, you may want to have a symbol whose value is 2 when it has a subscript of 1 , say to define a special case in a sequence. This would be entered as follows: c /: Subscript[c, 1] := 2 If the symbol c is encountered, neither of the discussed patterns applies. c on its own has no value of its own, hence no OwnValue , and looking down the command tree of c when seeing Subscript[c,1] yields nothing, since c is already on an outermost branch. An UpValue solves this problem: a symbol having an UpValue defines a pattern where not only the children, but also the parents are to be investigated, i.e. Mathematica has to look up the command tree to see whether the pattern is to be applied. 
In[5] := UpValues[c] Out[5] := {HoldPattern[Subscript[c, 1]] :> 2} The last command is SubValues , which is used for definitions of the type d[e][f] = x; This defines neither an OwnValue nor a DownValue for d , since it does not really define the value for the atomic object d itself, but for d[e] , which is a composite. Read the definition above as (d[e])[f]=x . In[6] := SubValues[d] Out[6] := {HoldPattern[d[e][f]] :> x} (Intuitively, one might expect an OwnValue for d[e] to be created; however, querying it directly results in an error: OwnValues[d[e]] gives "Argument d[e] at position 1 is expected to be a symbol.")
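To summarize, here is a compact way (my sketch, using fresh symbol names) to produce and inspect all four kinds of values side by side:

```mathematica
ClearAll[v, g, h, s]
v = 2;                        (* OwnValues:  value of the symbol itself *)
g[x_] := x^2;                 (* DownValues: rule attached below g *)
h /: Subscript[h, 1] := 2;    (* UpValues:   rule attached upward from h *)
s[1][2] = 3;                  (* SubValues:  rule for the composite s[1] *)
{OwnValues[v], DownValues[g], UpValues[h], SubValues[s]}
```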
{ "source": [ "https://mathematica.stackexchange.com/questions/96", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/52/" ] }
98
I often need to compute the eigenvalues of large matrices, and I invariably resort to MATLAB for these, simply because it is much faster. I'd like to change that, so that I can work entirely inside my notebook. Here's a plot comparing the timings between the two for eigenvalue decompositions for matrices of varying sizes (left). The y-axis shows the time in seconds. As you can see, there's about a factor 3 difference between the two (right). Here's a sample code in Mathematica to generate timings: timings = With[{x = RandomReal[NormalDistribution[], {#, #}]}, Eigenvalues[x]; // Timing // First] & /@ Range[500,5000,500] and its equivalent in MATLAB: s = 500:500:5000; t = zeros(numel(s),1); for i = 1:numel(s) x=randn(s(i)); t1=tic;eig(x);t(i)=toc(t1); end I do not think that Mathematica's algorithms are inefficient, as the fastest algorithms for eigenvalue decompositions (in the general case, not exploiting symmetry and such) are $\mathcal{O}(N^{2.376})$ and the timings both MATLAB's and Mathematica's implementations have the same correct slope on a log-log plot. I suspected unpacking in the background during the call to Eigenvalues and turning On["Packing"] confirms this. However, I don't think this alone could be the cause for a 3 fold speed reduction. I'm not expecting the timings to be exact either, as I understand that arrays and matrices are baked into the core of one and not the other, which can lead to performance differences. However, I'm interested in knowing if there are reasons other than the simplified one I gave above for the difference in timings and there ways in which I can improve the speeds or at least, reduce the difference by some amount. Or is this something that one has to accept as a fact of life?
Mathematica is every bit as fast as Matlab for these types of computations. The source of the discrepancy arises from the fact that Timing keeps track of total time used by all processors when Mathematica distributes the computation across them. We can examine a fair comparison using AbsoluteTiming , which is more comparable to Matlab's tic and toc . Consider the following computed on my Macbook Pro: t1 = First[Timing[Eigenvalues[RandomReal[{0, 1}, {1000, 1000}]]]]; t2 = First[AbsoluteTiming[Eigenvalues[RandomReal[{0, 1}, {1000, 1000}]]]]; {t1, t2} {5.16576, 1.329784} Again, the only difference is the use of Timing versus AbsoluteTiming . You can watch the wall clock to convince yourself that the faster time is accurate. Let's try this with the OP's code: timingsGood = With[{x = RandomReal[NormalDistribution[], {#, #}]}, Eigenvalues[x]; // AbsoluteTiming // First] & /@ Range[500, 5000, 500]; timingsBad = With[{x = RandomReal[NormalDistribution[], {#, #}]}, Eigenvalues[x]; // Timing // First] & /@ Range[500, 5000, 500]; Column[{timingsGood, timingsBad, timingsBad/timingsGood}] Note that the (incorrect) Timing result is always consistently about three times longer than the (correct) AbsoluteTiming result, which accounts just about exactly for the OP's observations. I ran a suite of numerical comparisons that I created several years ago. Here are my results: There are differences. Matlab is notably faster at singular value, Cholesky, and QR factorizations. Mathematica is slightly faster at sparse eigenvalue computations. They seem to be generally quite close to one another. There are a few other types of computations as well. Symbolically, Mathematica is way faster than Matlab's symbolic toolbox.
{ "source": [ "https://mathematica.stackexchange.com/questions/98", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5/" ] }
104
I used the code below (which is a sample from this gist containing more similar code ) in my answer to my own question about Mandelbrot-like sets for functions other than the simple quadratic on Math.SE to generate this image: cosineEscapeTime = Compile[{{c, _Complex}}, Block[{z = c, n = 2, escapeRadius = 10 \[Pi], maxIterations = 100}, While[And[Abs[z] <= escapeRadius, n < maxIterations], z = Cos[z] + c; n++]; n]] Block[{center = {0.5527, 0.9435}, radius = 0.1}, DensityPlot[ cosineEscapeTime[x + I y], {x, center[[1]] - radius, center[[1]] + radius}, {y, center[[2]] - radius, center[[2]] + radius}, PlotPoints -> 250, AspectRatio -> 1, ColorFunction -> "TemperatureMap"]] What could I do to improve the speed/time-efficiency of this code? Is there any reasonable way to parallelize it? (I'm running Mathematica 8 on an 8-core machine.) edit Thanks all for the help so far. I wanted to post an update with what I'm seeing based on the answers so far and see if I get any further refinements before I accept an answer. Without going to hand-written C code and/or OpenCL/CUDA stuff, the best so far seems to be to use cosineEscapeTime as defined above, but replace the Block[...DensityPlot[]] with: Block[{center = {0.5527, 0.9435}, radius = 0.1, n = 500}, Graphics[ Raster[Rescale@ ParallelTable[ cosineEscapeTime[x + I y], {y, center[[2]] - radius, center[[2]] + radius, 2 radius/(n - 1)}, {x, center[[1]] - radius, center[[1]] + radius, 2 radius/(n - 1)}], ColorFunction -> "TemperatureMap"], ImageSize -> n] ] Probably in large part because it parallelizes over my 8 cores, this runs in a little under 1 second versus about 27 seconds for my original code (based on AbsoluteTiming[] ).
Use these 3 components: Compile , compilation to C, and parallel computing. Also, to speed up coloring, instead of ArrayPlot use Graphics[Raster[Rescale[...], ColorFunction -> "TemperatureMap"]] In such cases Compile is essential. Compiling to C with parallelization will speed it up even more, but you need to have a C compiler installed. Note that the benefit of C and parallelization may only show up at higher image resolutions and with more cores. mandelComp = Compile[{{c, _Complex}}, Module[{num = 1}, FixedPoint[(num++; #^2 + c) &, 0, 99, SameTest -> (Re[#]^2 + Im[#]^2 >= 4 &)]; num], CompilationTarget -> "C", RuntimeAttributes -> {Listable}, Parallelization -> True]; data = ParallelTable[ a + I b, {a, -.715, -.61, .0001}, {b, -.5, -.4, .0001}]; Graphics[Raster[Rescale[mandelComp[data]], ColorFunction -> "TemperatureMap"], ImageSize -> 800, PlotRangePadding -> 0] This is just a prototype - you can figure out a better coloring. Another way is to use LibraryFunction - we have Mandelbrot built in: mlf = LibraryFunctionLoad["demo_numerical", "mandelbrot", {Complex}, Integer]; n = 501; samples = Table[mlf[x + I y], {y, -1.25, 1.25, 2.5/(n - 1)}, {x, -2., .5, 2.5/(n - 1)}]; colormap = Function[If[# == 0, {0., 0., 0.}, Part[r, #]]] /. r -> RandomReal[1, {1000, 3}]; Graphics[Raster[Map[colormap, samples, {2}]], ImageSize -> 512] Now, if you have a proper NVIDIA graphics card you can do some GPU computing with CUDA or OpenCL. 
I use OpenCL here because I got the source (from documentation btw): Needs["OpenCLLink`"] src = " __kernel void mandelbrot_kernel(__global mint * set, float zoom, \ float bailout, mint width, mint height) { int xIndex = get_global_id(0); int yIndex = get_global_id(1); int ii; float x0 = zoom*(width/3 - xIndex); float y0 = zoom*(height/2 - yIndex); float tmp, x = 0, y = 0; float c; if (xIndex < width && yIndex < height) { for (ii = 0; (x*x+y*y <= bailout) && (ii < MAX_ITERATIONS); \ ii++) { tmp = x*x - y*y +x0; y = 2*x*y + y0; x = tmp; } c = ii - log(log(sqrt(x*x + y*y)))/log(2.0); if (ii == MAX_ITERATIONS) { set[3*(xIndex + yIndex*width)] = 0; set[3*(xIndex + yIndex*width) + 1] = 0; set[3*(xIndex + yIndex*width) + 2] = 0; } else { set[3*(xIndex + yIndex*width)] = ii*c/4 + 20; set[3*(xIndex + yIndex*width) + 1] = ii*c/4; set[3*(xIndex + yIndex*width) + 2] = ii*c/4 + 5; } } } "; MandelbrotSet = OpenCLFunctionLoad[src, "mandelbrot_kernel", {{_Integer, _, "Output"}, "Float", "Float", _Integer, _Integer}, {16, 16}, "Defines" -> {"MAX_ITERATIONS" -> 100}]; width = 2048; height = 1024; mem = OpenCLMemoryAllocate[Integer, {height, width, 3}]; res = MandelbrotSet[mem, 0.0017, 8.0, width, height, {width, height}]; Image[OpenCLMemoryGet[First[res]], "Byte"] References: Fractals CDF paper Compile to C LibraryFunction OpenCL Demonstrations
{ "source": [ "https://mathematica.stackexchange.com/questions/104", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7/" ] }
119
One thing I could never wrap my head around is how Flatten works when provided with a matrix as the second argument, and the Mathematica help isn't particularly good on this one. Taken from the Flatten Mathematica documentation: Flatten[list, {{s11, s12, ...}, {s21, s22, ...}, ...}] Flattens list by combining all levels $s_{ij}$ to make each level $i$ in the result. Could someone elaborate on what this actually means/does?
One convenient way to think of Flatten with the second argument is that it performs something like Transpose for ragged (irregular) lists. Here is a simple example: In[63]:= Flatten[{{1,2,3},{4,5},{6,7},{8,9,10}},{{2},{1}}] Out[63]= {{1,4,6,8},{2,5,7,9},{3,10}} What happens is that elements which constituted level 1 in the original list are now constituents at level 2 in the result, and vice versa. This is exactly what Transpose does, but done for irregular lists. Note, however, that some information about positions is lost here, so we cannot directly invert the operation: In[65]:= Flatten[{{1,4,6,8},{2,5,7,9},{3,10}},{{2},{1}}] Out[65]= {{1,2,3},{4,5,10},{6,7},{8,9}} To have it reversed correctly, we'd have to do something like this: In[67]:= Flatten/@Flatten[{{1,4,6,8},{2,5,7,9},{3,{},{},10}},{{2},{1}}] Out[67]= {{1,2,3},{4,5},{6,7},{8,9,10}} A more interesting example is when we have deeper nesting: In[68]:= Flatten[{{{1,2,3},{4,5}},{{6,7},{8,9,10}}},{{2},{1},{3}}] Out[68]= {{{1,2,3},{6,7}},{{4,5},{8,9,10}}} Here again, we can see that Flatten effectively worked like (generalized) Transpose , interchanging pieces at the first 2 levels. The following will be harder to understand: In[69]:= Flatten[{{{1, 2, 3}, {4, 5}}, {{6, 7}, {8, 9, 10}}}, {{3}, {1}, {2}}] Out[69]= {{{1, 4}, {6, 8}}, {{2, 5}, {7, 9}}, {{3}, {10}}} The following image illustrates this generalized transpose: We may do it in two consecutive steps: In[72]:= step1 = Flatten[{{{1,2,3},{4,5}},{{6,7},{8,9,10}}},{{1},{3},{2}}] Out[72]= {{{1,4},{2,5},{3}},{{6,8},{7,9},{10}}} In[73]:= step2 = Flatten[step1,{{2},{1},{3}}] Out[73]= {{{1,4},{6,8}},{{2,5},{7,9}},{{3},{10}}} This works since the permutation {3,1,2} can be obtained as {1,3,2} followed by {2,1,3} . 
Another way to see how it works is to use numbers which indicate the position in the list structure: Flatten[{{{111, 112, 113}, {121, 122}}, {{211, 212}, {221, 222, 223}}}, {{3}, {1}, {2}}] (* ==> {{{111, 121}, {211, 221}}, {{112, 122}, {212, 222}}, {{113}, {223}}} *) From this, one can see that in the outermost list (first level), the third index (corresponding the third level of the original list) grows, in each member list (second level) the first element grows per element (corresponding to the first level of the original list), and finally in the innermost (third level) lists, the second index grows, corresponding to the second level in the original list. Generally, if the k-th element of the list passed as second element is {n} , growing the k-th index in the resulting list structure corresponds to increasing the n-th index in the original structure. Finally, one can combine several levels to effectively flatten the sub-levels, like so: In[74]:= Flatten[{{{1,2,3},{4,5}},{{6,7},{8,9,10}}},{{2},{1,3}}] Out[74]= {{1,2,3,6,7},{4,5,8,9,10}}
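A quick sanity check of the level-specification rules (my addition, not part of the original answer): for a regular, non-ragged array, this generalized transpose coincides with the ordinary Transpose :

```mathematica
m = {{1, 2, 3}, {4, 5, 6}};
(* swapping levels 1 and 2 of a rectangular matrix is a plain transpose *)
Flatten[m, {{2}, {1}}] === Transpose[m]
(* True *)
```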
{ "source": [ "https://mathematica.stackexchange.com/questions/119", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/103/" ] }
121
If I save a notebook, I only save what I see. That is, if I close Mathematica and reopen the notebook later, all previous calculations are lost, except for those things I've output. Now I know that you can save single definitions (and their dependencies) with Save , however is there a way to save the whole kernel state (preferably including internally cached data, e.g. from FullSimplify ) so that when I reload both notebook and kernel state, I can continue to work exactly where I left off? If there's no ready solution for it: From what I understand, the explicit definitions are stored in UpValues , DownValues and OwnValues ; also Options and Attributes affect evaluation, and therefore would have to be saved. However, those all expect an argument specifying a symbol to give information about. Is there any way to get a complete set of them? And would saving those actually suffice, or is there something else needed, too? Also, is there some way to explicitly get at internal cached information (it doesn't need to be in an understandable format, just being able to save and reload it would be sufficient)?
While it is true that you can not save a full state of the kernel, in some cases it may be enough for your purposes to save all symbols' definitions in some context(s), such as Global` (or whatever other contexts are of interest to you). This can be done via DumpSave , like DumpSave["state.mx", "Global`"] The .mx file generated by DumpSave will be platform-specific though. By using Get at some later point, you can reconstruct the values stored in symbols in those contexts you saved: Get["state.mx"] As stated already by @ruebenko, this will not generally fully reconstruct the kernel state. But if you manage to correctly account for all symbols (defined by you) which affect your computations, and depending on the circumstances, this may be enough for many practical purposes.
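If a platform-independent snapshot is acceptable (at the cost of speed and file size), the same idea works with plain Save , which writes readable Mathematica definitions instead of a binary .mx file — a sketch, with state.m as an arbitrary file name:

```mathematica
(* text format, portable across platforms; saves all Global` symbols *)
Save["state.m", "Global`*"]

(* later, in a fresh kernel: *)
Get["state.m"]
```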
{ "source": [ "https://mathematica.stackexchange.com/questions/121", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/129/" ] }
128
I like to build sophisticated plots by combining simpler ones with Show[] . Typically this involves setting non-default Plot-Options with the different Plot-Commands, like Show[ ListPlot[ ,Op1], Plot[ ,Op2], Op3] Unfortunately the Show[] command is not commutative, as Show[ Plot[ ,Op2], ListPlot[ ,Op1], Op3] can produce different results. My expectation was that putting settings in Op3 should overwrite the ones in Op1 and Op2 ; however, this does not work with options like PlotMarkers , which are only available within ListPlot[] . The description of the Show[g_1, g_2, g_3, ... ,g_i] -command gives only two hints: Options explicitly specified in Show override those included in the graphics expression. and The lists of non-default options in the g_i are concatenated. I’m not sure what this precisely means. Is Show[ ListPlot[ ,Op1], Plot[ ,Op2], Op3] equivalent to Show[ ListPlot[ ,Union[Op1,Op2]], Plot[ ,Union[Op1,Op2]], Op3] ? while Op3 overwrites whatever is in Union[Op1,Op2] ? And there is one more question: In Show[ g_1, g_2, g_3, ..., g_i ] the Plot in g_1 seems to be treated specially as it defines the PlotRange for the final image generated. I would like to know the full set of rules for how the Plot-Options are combined and to which Plot or Plots they are applied.
First a little background: All of Mathematica's plotting functions produce a Graphics expression (or Graphics3D , but let's talk about Graphics now). The Graphics expression is simply a representation of what you see in the graphic. You can look at it by converting the output cell to InputForm ( Ctrl - Shift - I ). For example, Plot will produce Graphics with Line s in it. Some of the options to plotting functions are passed on directly to Graphics and affect its appearance (how its contents get rendered). An example is Axes . Some others control what the plotting function will put into the graphics. Examples are PlotStyle or PlotMarkers . These are specific to (and different for) each plotting function. How Show works: It combines several Graphics expressions into one. The returned Graphics expression will inherit its options from the first one passed to Show . In Show you can override some Graphics options directly, but of course this will only override options for Graphics , and not the plotting functions that produced the graphics (as those have already finished running by the time Show sees their output). So Show[ListPlot[... , Op1], Plot[... , Op2], Op3] is equivalent to Show[ListPlot[... , Op3, Op1], Plot[...]] or to Show[ListPlot[...], Plot[...], Op3, Op1] but this is only valid for ListPlot options that are also Graphics options. It is not valid for PlotMarkers . Also note that if the same option is specified several times in the same Graphics , the first one takes precedence. (Thanks J. M.!)
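A minimal way to verify the inheritance rule (a sketch of my own; the particular option values are arbitrary):

```mathematica
p1 = Plot[Sin[x], {x, 0, 2 Pi}, PlotRange -> {-2, 2}];
p2 = ListPlot[{{1, 1}, {4, -1}}, PlotRange -> {-5, 5}];
(* the combined Graphics inherits PlotRange from the first argument,
   so this should report p1's setting, not p2's *)
Options[Show[p1, p2], PlotRange]
```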
{ "source": [ "https://mathematica.stackexchange.com/questions/128", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/133/" ] }
145
There are commands like NonlinearModelFit[] or NDSolve[] that have the option Method , which typically defaults to Automatic . How can you check, after the command has been evaluated, which method Mathematica picked?
I think you can actually see (most of) what Mathematica is doing by using Trace[..., TraceInternal -> True] . For example, Select[Flatten[ Trace[NDSolve[y'[x] == x && y[0] == 0, y, {x, 0, 6}], TraceInternal -> True]], ! FreeQ[#, Method | NDSolve`MethodData] &] shows the DE was evaluated using NDSolve`LSODA and Newton's method. (I think) And Select[Flatten[ Trace[NDSolve[{Derivative[1][x][t]^2 + x[t]^2 == 1, x[0] == 1/2}, x, {t, 0, 10 Pi}, SolveDelayed -> True], TraceInternal -> True]], ! FreeQ[#, Method | NDSolve`MethodData] &] used NDSolve`IDA . As an aside, here's something I just learnt from Trott's Mathematica guidebook for numerics , to see all of the methods and suboptions for NDSolve {#, First /@ #2} & @@@ Select[{#, Options[#]} & /@ (ToExpression /@ DeleteCases[Names["NDSolve`*"],(* PDE method only *) "NDSolve`MethodOfLines"]), (Last[#] =!= {}) &]
{ "source": [ "https://mathematica.stackexchange.com/questions/145", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/133/" ] }
148
Is it possible in Mathematica to get a step-by-step evaluation of some functions; that's to say, outputting not only the result but all the stages that have led to it? If so, how does one do it? Example : Let's say I want to know the steps to get the derivative of $\cos x\times\exp x$; it should first tell me that it's equal to $\frac{d}{dx}(\exp x)\times\cos x+\exp x \times \frac{d}{dx}(\cos x)$ and then render the result to say $\exp{x}\times(\cos x-\sin x)$.
For differentiation at least, old versions of Mathematica had a demonstration function called WalkD[] that holds your hand and shows what is done at each stage up until the final answer. In general, however... You should realize at the outset that while knowing about the internals of Mathematica may be of intellectual interest, it is usually much less important in practice than you might at first suppose. Indeed, one of the main points of Mathematica is that it provides an environment where you can perform mathematical and other operations without having to think in detail about how these operations are actually carried out inside your computer. ... Particularly in more advanced applications of Mathematica , it may sometimes seem worthwhile to try to analyze internal algorithms in order to predict which way of doing a given computation will be the most efficient. And there are indeed occasionally major improvements that you will be able to make in specific computations as a result of such analyses. But most often the analyses will not be worthwhile. For the internals of Mathematica are quite complicated, and even given a basic description of the algorithm used for a particular purpose, it is usually extremely difficult to reach a reliable conclusion about how the detailed implementation of this algorithm will actually behave in particular circumstances. A typical problem is that Mathematica has many internal optimizations, and the efficiency of a computation can be greatly affected by whether the details of the computation do or do not allow a given internal optimization to be used. Put another way: how Mathematica does things doesn't necessarily correspond to "manual" methods. 
Here's my modest attempt to (somewhat) modernize WalkD[] : Format[d[f_, x_], TraditionalForm] := DisplayForm[RowBox[{FractionBox["\[DifferentialD]", RowBox[{"\[DifferentialD]", x}]], f}]]; SpecificRules = {d[(f_)[u___, x_, v___], x_] /; FreeQ[{u}, x] && FreeQ[{v}, x] :> D[f[u, x, v], x], d[(a_)^(x_), x_] :> D[a^x, x] /; FreeQ[a, x]}; ConstantRule = d[c_, x_] :> 0 /; FreeQ[c, x]; LinearityRule = {d[f_ + g_, x_] :> d[f, x] + d[g, x], d[c_ f_, x_] :> c d[f, x] /; FreeQ[c, x]}; PowerRule = {d[x_, x_] :> 1, d[(x_)^(a_), x_] :> a*x^(a - 1) /; FreeQ[a, x]}; ProductRule = d[f_ g_, x_] :> d[f, x] g + f d[g, x]; QuotientRule = d[(f_)/(g_), x_] :> (d[f, x]*g - f*d[g, x])/g^2; InverseFunctionRule = d[InverseFunction[f_][x_], x_] :> 1/f'[InverseFunction[f][x]]; ChainRule = {d[(f_)^(a_), x_] :> a*f^(a - 1)*d[f, x] /; FreeQ[a, x], d[(a_)^(f_), x_] :> Log[a]*a^f*d[f, x] /; FreeQ[a, x], d[(f_)[g__], x_] /; ! FreeQ[{g}, x] :> (Derivative[##][f][g] & @@@ IdentityMatrix[Length[{g}]]).(d[#, x] & /@ {g}), d[(f_)^(g_), x_] :> f^g*d[g*Log[f], x]}; $RuleNames = {"Specific Rules", "Constant Rule", "Linearity Rule", "Power Rule", "Product Rule", "Quotient Rule", "Inverse Function Rule", "Chain Rule"}; displayStart[expr_] := CellPrint[ Cell[BoxData[MakeBoxes[HoldForm[expr], TraditionalForm]], "Output", Evaluatable -> False, CellMargins -> {{Inherited, Inherited}, {10, 10}}, CellFrame -> False, CellEditDuplicate -> False]] displayDerivative[expr_, k_Integer] := CellPrint[ Cell[BoxData[TooltipBox[RowBox[{InterpretationBox["=", Sequence[]], " ", MakeBoxes[HoldForm[expr], TraditionalForm]}], $RuleNames[[k]], LabelStyle -> "TextStyling"]], "Output", Evaluatable -> False, CellMargins -> {{Inherited, Inherited}, {10, 10}}, CellFrame -> False, CellEditDuplicate -> False]] WalkD[f_, x_] := Module[{derivative, oldderivative, k}, derivative = d[f, x]; displayStart[derivative]; While[! 
FreeQ[derivative, d], oldderivative = derivative; k = 0; While[oldderivative == derivative, k++; derivative = derivative /. ToExpression[StringReplace[$RuleNames[[k]], " " -> ""]]]; displayDerivative[derivative, k]]; D[f, x]] I've tried to make the formatting of the derivative look a bit more traditional, as well as having the differentiation rule used be a tooltip instead of an explicitly generated cell (thus combining the best features of WalkD[] and RunD[] ); you'll only see the name of the differentiation rule used if you mouseover the corresponding expression.
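Applied to the example from the question (assuming the definitions above have been evaluated), this prints each intermediate expression, with the rule used shown as a tooltip:

```mathematica
WalkD[Cos[x] Exp[x], x]
(* final result: E^x Cos[x] - E^x Sin[x] *)
```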
{ "source": [ "https://mathematica.stackexchange.com/questions/148", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/44/" ] }
161
Mathematica's Import command is purportedly able to import .AVI files. However, I find that many files that I want to import result only in Mathematica showing a blank frame or another placeholder, even though some or all of my video players can play these AVI files. This is an example where it works as advertised: Import["http://er.jsc.nasa.gov/seh/jfkrice.avi", {"ImageList", 2}] However, Import["http://people.sc.fsu.edu/~jburkardt/data/avi/ccvt_box.avi", {"ImageList", 2}] doesn't work (VLC does). I get an empty white frame. More examples: Import["http://people.sc.fsu.edu/~jburkardt/data/avi/sine_wave.avi", {"ImageList", 2}] (works on VLC and Media Player Classic) Import["http://people.sc.fsu.edu/~jburkardt/data/avi/star_collapse.avi", {"ImageList", 2}] (BIG: 126 MB!) Should have looked like this: and works on Windows Media Player, VLC and Media Player Classic, but I get a white box in Mathematica. This is on Windows 7 64-bit / Mathematica 8.0.4. I have the K-Lite codec pack installed. UPDATE Responding to Thomas' comment below, I found that most of the sample files I linked to in my original post are gone. 
I tried to gather a new set and found some that worked and some that don't: Importable: Import[#, "VideoEncoding"] & /@ {"http://er.jsc.nasa.gov/seh/jfkrice.avi", "http://redmine.yorba.org/attachments/615/MVI_0572.AVI", "http://www-eng-x.llnl.gov/documents/a_video.avi", "http://redmine.yorba.org/attachments/628/MVI_4981.AVI", "http://www.csoft.co.uk/video/original/earth.avi", "http://www.mysticfractal.com/video/fp.avi", "http://www.softage.ru/files/video-codec/uncompressed/suzie.avi", "http://archive.org/download/Architects_of_Tomorrow/2007-12-10-02-39-00.avi"} {"msvc", "MJPG", "msvc", "MJPG", "msvc", "msvc", "YUV", "Uncompressed"} Don't import: Import[#, "VideoEncoding"] & /@ {"https://code.ros.org/trac/opencv/export/7213/trunk/opencv/samples/cpp/tutorial_code/HighGUI/video-input-psnr-ssim/video/Megamind.avi", "http://samples.mplayerhq.hu/avi/verona60avi56k.avi", "http://samples.mplayerhq.hu/avi/filedoesitbetter.avi", "http://www.infognition.com/ScreenPressor/browsing-divx.avi" } {"XVID", "MP42", "MJPG", "DX50"}
64-bit Windows only Note for Mathematica 11.3: There is a potential conflict between MathMF and the built-in MediaTools package. See here for details and here for an example of how to use MediaTools in place of MathMF . Note for Mathematica version 10: The Wolfram Library has been updated in version 10 and you will need to recompile the MathMF DLL. This is most easily accomplished by evaluating "MathMF"//FindLibrary//DeleteFile prior to loading the package. Link to package on GitHub I have written a package called MathMF which uses a LibraryLink DLL to do frame-by-frame video import and export with Windows Media Foundation. It should be able to read a reasonable variety of movie files, including AVI, WMV and MP4. Exporting is currently limited to WMV and MP4 formats (AVI encoding is not natively supported by Media Foundation) Here is the sort of code you can write with it. The code first opens a video file for reading, and creates a new video file for writing to. It then runs a loop in which each frame is sequentially read from the input stream, processed in Mathematica and then written to the output stream. So Mathematica is effectively being used as a video filter. {duration, framerate, width, height} = MFInitSourceReader["C:\\Users\\Simon\\Desktop\\test1.wmv"]; MFInitSinkWriter["C:\\Users\\Simon\\Desktop\\filtered.wmv", width, height, "FrameRate" -> framerate] While[ (image = MFGrabFrame["RealImage"]) =!= EndOfFile, MFSendFrame @ GradientFilter[image, 2] ] ~Monitor~ image MFFinaliseSink[] The package can be downloaded from the GitHub link at the top of this post, it is too large to include in full here. The package includes the library source code, and on first use will attempt to compile the library locally. I believe the compilation should work if you have Visual Studio 2010 or later installed, and probably won't work if you use a different compiler. 
There is a pre-built DLL available if the compilation fails (see the readme on GitHub for more details) I hope some people find this useful, it has been hovering in my mind as something to try to do for quite some time, hindered mainly by my total lack of experience with C++ and COM programming.
{ "source": [ "https://mathematica.stackexchange.com/questions/161", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/57/" ] }
164
Is there any way in Mathematica to find out the currently free memory on the system Mathematica runs on (like the utility free shows on the command line)? I've found functions that show the memory occupied by Mathematica itself, but of course there are also other programs running on the system taking their share of memory, so that number is not sufficient to estimate the free memory. The background is that currently, if I do something which might fill up my memory (and I don't forget to do it), I call free by hand, subtract a safety margin, and then use MemoryConstrained to prevent the memory from getting completely filled up (with quite unpleasant consequences). I'd like to automate that. While I certainly could call free from Mathematica and parse its output for the number, I'd like to avoid that if I can (who knows whether the next system update makes subtle changes to free so that the parsing fails to give the correct number).
You might be able to use JLink along with some undocumented behaviour of the Java class java.lang.management.ManagementFactory to get the information you seek: Needs["JLink`"] InstallJava[]; LoadJavaClass["java.lang.management.ManagementFactory"]; JavaBlock[ {#, java`lang`management`ManagementFactory`getOperatingSystemMXBean[]@#[]} & /@ { getName , getArch , getVersion , getCommittedVirtualMemorySize , getFreePhysicalMemorySize , getFreeSwapSpaceSize , getTotalPhysicalMemorySize , getTotalSwapSpaceSize , getProcessCpuTime , getAvailableProcessors , getSystemLoadAverage } // Grid ] This works on Windows 7 (Mathematica 8, 64-bit): Out[368]= getName Windows Vista getArch amd64 getVersion 6.1 getCommittedVirtualMemorySize 102449152 getFreePhysicalMemorySize 5997510656 getFreeSwapSpaceSize 14498115584 getTotalPhysicalMemorySize 8587284480 getTotalSwapSpaceSize 17172676608 getProcessCpuTime 6068438900 getAvailableProcessors 4 getSystemLoadAverage -1. I don't have Mac or Linux boxes to hand at the moment to test whether it works there as well.
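If the goal from the question is to automate MemoryConstrained, the bean query above can be wrapped into a helper. This is only a sketch: the helper names and the 100 MB safety margin are my own choices, and the whole thing depends on the undocumented OperatingSystemMXBean behaving as shown on your platform.

```mathematica
Needs["JLink`"]
InstallJava[];
LoadJavaClass["java.lang.management.ManagementFactory"];

(* free physical memory in bytes, via the (undocumented) OperatingSystemMXBean *)
freePhysicalMemory[] :=
  java`lang`management`ManagementFactory`getOperatingSystemMXBean[]@
    getFreePhysicalMemorySize[]

(* evaluate expr, aborting if it would use more than the currently free
   memory minus an arbitrary 100 MB safety margin *)
SetAttributes[safeEvaluate, HoldFirst]
safeEvaluate[expr_] :=
  MemoryConstrained[expr, freePhysicalMemory[] - 100*2^20]
```

With that, something like safeEvaluate[Table[RandomReal[], {10^9}]] should abort rather than exhaust the machine.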
{ "source": [ "https://mathematica.stackexchange.com/questions/164", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/129/" ] }
167
What are the best (most robust and most convenient) ways to create palettes that can be installed permanently (using Palettes -> Install Palette... ) and are safe to use? I'd be interested in how other people have done this in the past to learn more about idiomatic front end programming. I put some code illustrating the pattern I am using now at the end of this post. I'd appreciate some comments on it. Notes and requirements: The palette should always work, regardless of whether the kernel has been quit or an evaluation is running. The palette may have more than one button which share some code between them. It must not pollute the Global` context or change the kernel state in a way that might break something unexpectedly. I'm looking for an easy way to define palettes (minimal boilerplate code and extra work) It would be nice (non-essential) if several versions of the palette could coexist independently (my current approach doesn't have this because it uses its own context to hide its function definition, but everything in this context is shared) It would also be nice (non-essential) to integrate documentation in an easy way (help button bringing it up maybe?) My current approach is illustrated below. It "localizes" its symbols by putting them in a separate context, and uses DynamicModule to ensure that all the definitions are done before any button code is run. SetAttributes[paletteButton, HoldAll] paletteButton[name_, tooltip_, func_, opt : OptionsPattern[]] := Tooltip[Button[name, Unevaluated[func], Appearance -> "Palette", opt], tooltip, TooltipDelay -> Automatic] Begin["SomePalette`"]; PaletteNotebook[ DynamicModule[{}, Column[{ paletteButton["One", "Button one", function[1]], paletteButton["Two", "Button two", function[2]] }], Initialization :> ( function[x_] := MessageDialog[x] ) ], WindowTitle -> "Some Palette" ] End[];
All palette state (i.e., variables which affect the palette and should be remembered between sessions) should be vectored through the palette's TaggingRules option, and its initialization should be done in the palette's NotebookDynamicExpression option. That, plus context isolation of any kernel functions you need to define should solve all of the points you raise, excepting the documentation issue. An example palette which demonstrates these principles: CreatePalette[ Column[{Button["Print opener state", MyPalette`Private`DoSomething[ "The opener is " <> If[CurrentValue[EvaluationNotebook[], {TaggingRules, "opener"}], "open", "closed"]]], OpenerView[{"Group of buttons", Column[{Button[1], Button[2]}]}, Dynamic[CurrentValue[ EvaluationNotebook[], {TaggingRules, "opener"}, False]]]}], NotebookDynamicExpression :> Refresh[MyPalette`Private`DoSomething[MyPalette`Private`x_] := Print[MyPalette`Private`x], None]] Let's hit the items raised in this code one by one... The palette uses a kernel-defined function which is in NotebookDynamicExpression . The code is wrapped in Refresh[_,None] to ensure that it evaluates once only when the notebook is opened. The code is context isolated by hand. Note that Begin and End won't work here, although they would work inside of a package, or if you wrapped the code in ToExpression (e.g., Begin["foo`"];ToExpression["code"];End[] ). A palette-wide state variable is stored in the palette's TaggingRules , which can be accessed by using CurrentValue[EvaluationNotebook[],{TaggingRules,"opener"}] . Because "opener" is a string, no symbols are introduced into any context. State variables will typically need to be initialized. I could do that in various standard ways, but I used the undocumented third argument to CurrentValue which sets it to False if it doesn't already have a value. Once the palette is installed, the TaggingRules setting will persist between instances of the palette, even if you quit Mathematica. 
Mathematica automatically serializes an installed palette's TaggingRules settings when you close it by storing the value into the global option PalettesMenuSettings . If you have multiple versions of the palette open, they'll each operate using independent state variables because the state variable is attached to the palette notebook. If multiple versions of the palette are installed under different names then the PalettesMenuSettings trick will store the TaggingRules separately.
{ "source": [ "https://mathematica.stackexchange.com/questions/167", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
180
Say I have a list x={2,4,6,8,10} and I want to find out the positions of the elements that are greater than 7. Select[x, #>7&] gives the elements themselves, and Position[x,8] gives the position of the elements satisfying one of the possible criteria, but what I am looking for would be a mix of the two returning {4,5} . Any suggestions?
Position[{2, 4, 6, 8, 10}, _?(# > 7 &)] does the job. Apply Flatten[] if need be. As noted by Dan in a comment alluding to Brett's answer, using the level-specification argument of Position[] might sometimes be needed, if the numbers are not Real s, Rational s or Integer s.
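For illustration, a couple of equivalent formulations (my own additions, not from the answer) that some may find more readable:

```mathematica
x = {2, 4, 6, 8, 10};

Flatten @ Position[x, _?(# > 7 &)]
(* {4, 5} *)

(* Pick the indices whose corresponding element matches the pattern *)
Pick[Range @ Length @ x, x, _?(# > 7 &)]
(* {4, 5} *)
```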
{ "source": [ "https://mathematica.stackexchange.com/questions/180", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/140/" ] }
189
Say I have a matrix m and a vector v. v = {a[1], a[2], a[3], a[4]}; m={{a[1,1],a[1,2],a[1,3],a[1,4]}, {a[2,1],a[2,2],a[2,3],a[2,4]}, {a[3,1],a[3,2],a[3,3],a[3,4]}, {a[4,1],a[4,2],a[4,3],a[4,4]}}; What is the most efficient way to add v to the rows of m to create... mrow={{a[1],a[2],a[3],a[4]}, {a[1,1],a[1,2],a[1,3],a[1,4]}, {a[2,1],a[2,2],a[2,3],a[2,4]}, {a[3,1],a[3,2],a[3,3],a[3,4]}, {a[4,1],a[4,2],a[4,3],a[4,4]}} Likewise, what is the most efficient way to add v to the columns of m to create... mcol={{a[1],a[1,1],a[1,2],a[1,3],a[1,4]}, {a[2],a[2,1],a[2,2],a[2,3],a[2,4]}, {a[3],a[3,1],a[3,2],a[3,3],a[3,4]}, {a[4],a[4,1],a[4,2],a[4,3],a[4,4]}}; EDIT : I've tested some of the suggestions for adding a column with a large matrix and was somewhat surprised by the results. m = RandomVariate[NormalDistribution[], {1000, 1000}]; v = RandomVariate[NormalDistribution[], 1000]; In[37]:= AbsoluteTiming[Do[MapThread[Prepend, {m, v}], {100}];] Out[37]= {1.809623, Null} In[38]:= AbsoluteTiming[Do[Transpose[Prepend[Transpose[m], v]], {100}];] Out[38]= {2.449231, Null} In[39]:= AbsoluteTiming[Do[Transpose[Join[Transpose[m], {v}]], {100}];] Out[39]= {2.271853, Null}
ArrayFlatten is much faster than combination of Join and Transpose : m = RandomVariate[NormalDistribution[], {1000, 1000}]; v = RandomVariate[NormalDistribution[], 1000]; Check that ArrayFlatten gives the same output: (* In[54]:=*) ArrayFlatten[{{Transpose[{v}], m}}] == Transpose[Join[{v}, Transpose[m]]] (* Out[54]= True *) (* In[57]:= *) ArrayFlatten[{{Transpose[{v}], m}}] == MapThread[Prepend, {m, v}] (* Out[57]= True *) See the timing: (* In[55]:= *) Do[ ArrayFlatten[{{Transpose[{v}], m}}], {10^3}] // AbsoluteTiming (* Out[55]= {4.330433, Null} *) (* In[58]:= *) Do[MapThread[Prepend, {m, v}], {10^3}] // AbsoluteTiming (* Out[58]= {11.766177, Null} *) (* In[56]:= *) Do[ Transpose[Join[{v}, Transpose[m]]], {10^3}] // AbsoluteTiming (* Out[56]= {16.700670, Null} *)
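For completeness: the row case (mrow from the question) needs no ArrayFlatten or Transpose at all, since rows are the top-level elements of the matrix:

```mathematica
(* prepend v as a new first row *)
mrow = Join[{v}, m];   (* or equivalently Prepend[m, v] *)

(* prepend v as a new first column, as in the answer above *)
mcol = ArrayFlatten[{{Transpose[{v}], m}}];
```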
{ "source": [ "https://mathematica.stackexchange.com/questions/189", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/43/" ] }
197
Suppose I have a few symbols, one of which has a value: {abc1, abc2 = 5, abc3}; I can use Names to get the list of names, as strings: InputForm[names = Names["Global`abc*"]] (* {"abc1", "abc2", "abc3"} *) Now I want to find which symbols have values. This fails, because ValueQ expects the first argument to be a Symbol , not a String : Select[names, ValueQ] (* {} *) This fails (with lots of messages), because ValueQ doesn't evaluate the argument enough: Cases[names, st_ /; ValueQ[Symbol[st]]] (* {"abc1", "abc2", "abc3"} *) If we force evaluation, we go too far, and this fails because we get ValueQ[5] instead of ValueQ[abc2] : Cases[names, st_ /; ValueQ[Evaluate[Symbol[st]]]] (* {} *) This approach works, but is far from elegant: Cases[names, st_ /; ToExpression["ValueQ[" <> st <> "]"]] (* "abc2" *)
I usually use ToExpression["symbol", InputForm, ValueQ] ToExpression will wrap the result in its 3rd argument before evaluating it. Generally, all functions that extract parts ( Extract , Level , etc.) have such an argument. This is useful when extracting parts of held expressions. ToExpression acts on strings or boxes, but both the problem with evaluation control and the solution is the same. I thought this was worth mentioning here.
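Applied to the question's example, this slots directly into Select:

```mathematica
{abc1, abc2 = 5, abc3};

(* ValueQ receives each name as an unevaluated symbol *)
Select[Names["Global`abc*"], ToExpression[#, InputForm, ValueQ] &]
(* {"abc2"} *)
```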
{ "source": [ "https://mathematica.stackexchange.com/questions/197", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/69/" ] }
198
Is it possible? Is it platform specific? Does it rely on the graphics hardware? Why does the antialiasing slider under Preferences > Appearance > Graphics do nothing? I remember seeing some post-plotting solutions years before in MathGroup, but could not find it. Edit System is: HP EliteBook 8440p, integrated Intel HD Graphics, running Windows 7, 64bit
This needs specific support from your graphics card. My own graphics card is very old, and does not support it, so the slider does nothing on my machine. But the good news is that there are workarounds, and I even made an antialiasing palette (code at the end of the post -- evaluate it, pop out the palette, and if you prefer, save it using Palettes -> Install Palette... ). This is the core antialiasing function I use: antialias[g_, n_: 3] := ImageResize[Rasterize[g, "Image", ImageResolution -> n 72], Scaled[1/n]] It simply renders a large image, and it downscales it. The results can be better than with a better graphics card's built-in antialiasing, so it's worth a look even if you have a good graphics card. Problems with this method: Fonts can be blurrier than what you'd like With a high scaling factor, it may expose bugs in your graphics driver, and show some unusual results (I had problems with opacity in more complex graphics) Tick marks don't scale properly (I think this is a bug), so they are barely visible on the antialiased version. This is the palette code. Usage: select a 3D graphic and press the button. It'll insert an antialiased image below. 
Begin["AA`"]; PaletteNotebook[DynamicModule[ {n = 3}, Column[{ SetterBar[ Dynamic[n], {2 -> "2\[Times]", 3 -> "3\[Times]", 4 -> "4\[Times]", 6 -> "6\[Times]"}, Appearance -> "Palette"], Tooltip[ Button["Antialias", antialiasSelection[SelectedNotebook[], n], Appearance -> "Palette"], "Antialias selected graphics using the chosen scaling factor.\nA single 2D or 3D graphics box must be selected."] }], Initialization :> ( antialias[g_, n_Integer: 3] := ImageResize[Rasterize[g, "Image", ImageResolution -> n 72], Scaled[1/n]]; antialiasSelection[doc_, n_] := Module[{selection, result}, selection = NotebookRead[doc]; If[MatchQ[selection, _GraphicsBox | _Graphics3DBox], result = ToBoxes@Image[antialias[ToExpression[selection], n], Magnification -> 1]; SelectionMove[doc, After, Cell]; NotebookWrite[doc, result], Beep[] ] ] ) ], TooltipBoxOptions -> {TooltipDelay -> Automatic}, WindowTitle -> "Antialiasing" ] End[]; Demonstration:
{ "source": [ "https://mathematica.stackexchange.com/questions/198", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
237
Inspired by this answer , I am interested to know if there are best practices or rules of thumb for constructing patterns, for example for use in function definitions ( f[x_ Pattern ] or f[x: Pattern ] ), replacement rules ( expr /. x: Pattern :> y ), or functions that use patterns as arguments (e.g. Cases ). Leonid mentions that "syntactic patterns", without Condition or PatternTest elements, are faster for things like Cases . Is this a general observation for all pattern-matching constructs in Mathematica, or is it specific to Cases ? Is there a definition of a syntactic pattern as opposed to other kinds? And if the test in question requires "slower" constructs like BlankNullSequence ( ___ ), are there workarounds or optimisations that are effective, or is it not worth the time spent optimising code?
Some pitfalls in pattern-construction You should ask several questions: Will your pattern involve frequent invocation of the evaluator (this happens if it contains Condition and / or PatternTest , and is tested many times). If yes, this will slow down the pattern-matcher. Here is an example taken from this answer randomString[]:=FromCharacterCode@RandomInteger[{97,122},5]; rstest = Table[randomString[],{1000000}]; In[102]:= MatchQ[rstest,{__String}]//Timing Out[102]= {0.047,True} In[103]:= MatchQ[rstest,{__?StringQ}]//Timing Out[103]= {0.234,True} Will your pattern make the pattern-matcher perform many a-priori doomed matching attempts (and thus, underutilize the runs of the pattern-matcher)? If yes, this will slow it down a lot . Patterns with BlankSequence or BlankNullSequence are notorious for that, particularly in combination with ReplaceRepeated . For example, this list sorting is very inefficient: list//.{left___,x_,y_,right___}/;x>y:>{left,y,x,right} However, there are cases where such patterns are very efficient as well, such as in this answer . Will your pattern lead to excessive copying of parts? This happens also for patterns like x___ , because the rule like {x_,y___}:>{y} will copy the entire sequence (array) y during the match. This is because lists are implemented as arrays in Mathematica. As in example here, consider the following implementation of mergeSort, taken from my answer in this thread : Clear[merge]; merge[x_List, y_List] := Block[{merge}, Flatten[merge[x, y] //. { merge[{a_, b___}, {c_, d___}] :> If[a < c, {a, merge[{b}, {c, d}]}, {c, merge[{a, b}, {d}]} ], merge[{}, {a__}] :> {a}, merge[{a__}, {}] :> {a}}]] This one is very slow. The detailed analysis is in the same answer I linked to, but here is the version based exclusively on ReplaceRepeated , but made efficient because it uses linked lists: Clear[toLinkedList]; toLinkedList[x_List] := Fold[{#2, #1} &, {}, Reverse[x]]; Module[{h, lrev}, mergeLinked[x_h, y_h] := Last[{x, y, h[]} //. 
{ {fst : h[hA_, tA_h], sec : h[hB_, tB_h], e_h} :> If[hA > hB, {tA, sec, h[hA, e]}, {fst, tB, h[hB, e]}], {fst : h[hA_, tA_h], h[], e_h} :> {tA, h[], h[hA, e]}, {h[], sec : h[hB_, tB_h], e_h} :> {h[], tB, h[hB, e]}}]; lrev[set_] := Last[h[set, h[]] //. h[h[hd_, tl_h], acc_h] :> h[tl, h[hd, acc]]]; sort[lst_List] := Flatten[Map[h[#, h[]] &, lst] //. x_List :> Flatten[{toLinkedList@x, {}} //. {{hd1_, {hd2_, tail_List}}, accum_List} :> {tail, {accum, lrev@mergeLinked[hd1, hd2]}}], Infinity, h]]; Just due to the use of linked lists and the resulting memory/run-time savings, this implementation recovers the correct n log n asymptotic complexity of the merge sort algorithm, even though ReplaceRepeated is used all over. The benchmarks can be found in the quoted post. Does your pattern lead to accidental unpacking of packed arrays, even when that is not necessary? This can slow things down significantly. In this answer , I discussed some possible work-arounds to avoid such situations. Summary and recommendations: Be careful with __ and ___ Be careful with ReplaceRepeated Try to construct patterns so as to minimize failed pattern-matching attempts. Avoid Condition and PatternTest whenever possible, and use syntactic patterns Watch out for unpacking during pattern matching In place of __ and ___ , try using linked lists when you can
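As a quick, self-contained illustration of the syntactic-versus-evaluator point (absolute timings will of course vary by machine and version):

```mathematica
data = RandomInteger[{0, 100}, 10^6];

(* purely syntactic pattern: handled inside the pattern-matcher *)
First @ AbsoluteTiming[Cases[data, _Integer]]

(* PatternTest calls back into the main evaluator for every element *)
First @ AbsoluteTiming[Cases[data, _?IntegerQ]]
```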
{ "source": [ "https://mathematica.stackexchange.com/questions/237", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8/" ] }
246
I have the following images, both 480 x 270 pixels: I'd like to stitch them side by side in a 960 x 270 pixel image like so (scaled down): So far, I've been using GraphicsGrid to try to achieve this: c1 = Import["http://i.stack.imgur.com/2SRcD.png"]; c2 = Import["http://i.stack.imgur.com/zL8id.png"]; (* Frame parameter set here to emphasize next point. *) g = GraphicsGrid[{{c1, c2}}, Frame -> True] But there's padding between the images: Additionally, it's scaled incorrectly, as ImageDimensions shows: ImageDimensions@g (* Expect {960, 540} *) {360, 180} I can use the ImageSize parameter to explicitly set the size of the output image, but that just reproduces the above result to scale. How can combine the images side-by-side without scaling or padding?
What you're looking for is ImageAssemble : c1 = Import["http://i.stack.imgur.com/2SRcD.png"]; c2 = Import["http://i.stack.imgur.com/zL8id.png"]; ImageAssemble[{c1, c2}] It can also assemble vertically and horizontally: ImageAssemble[{{c1, c1}, {c2, c2}}]
{ "source": [ "https://mathematica.stackexchange.com/questions/246", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/108/" ] }
275
Stan Wagon's Mathematica in Action (second edition; I haven't read the third edition and I'm hoping to eventually see it), demonstrates a nifty function called FindAllCrossings2D[] . What the function basically does is to augment FindRoot[] by using ContourPlot[] to find crossings that FindRoot[] can subsequently polish. Here , Wagon uses the function to assist in solving one of the questions of the SIAM hundred-digit challenge. ContourPlot[] changed quite a bit starting from version 6 (e.g., it now outputs GraphicsComplex[] objects), and FilterRules[] has superseded the old standby FilterOptions[] With these in mind, I set out to update FindAllCrossings2D[] : Options[FindAllCrossings2D] = Sort[Join[Options[FindRoot], {MaxRecursion -> Automatic, PerformanceGoal :> $PerformanceGoal, PlotPoints -> Automatic}]]; FindAllCrossings2D[funcs_, {x_, xmin_, xmax_}, {y_, ymin_, ymax_}, opts___] := Module[{contourData, seeds, tt, fy = Compile[{x, y}, Evaluate[funcs[[2]]]]}, contourData = Map[First, Cases[ Normal[ ContourPlot[funcs[[1]], {x, xmin, xmax}, {y, ymin, ymax}, Contours -> {0}, ContourShading -> False, PlotRange -> {Full, Full, Automatic}, Evaluate[ Sequence @@ FilterRules[Join[{opts}, Options[FindAllCrossings2D]], DeleteCases[Options[ContourPlot], Method -> _]]] ]], _Line, Infinity]]; seeds = Flatten[Map[#[[ 1 + Flatten[Position[Rest[tt = Sign[Apply[fy, #, 2]]] Most[tt], -1]] ]] &, contourData], 1]; If[seeds == {}, seeds, Select[ Union[Map[{x, y} /. FindRoot[{funcs[[1]] == 0, funcs[[2]] == 0}, {x, #[[1]]}, {y, #[[2]]}, Evaluate[ Sequence @@ FilterRules[Join[{opts}, Options[FindAllCrossings2D]], Options[FindRoot]]]] &, seeds]], (xmin < #[[1]] < xmax && ymin < #[[2]] < ymax) &]]] The function works splendidly, it seems. 
I tried out the same example Wagon used in his book: f[x_, y_] := -Cos[y] + 2 y Cos[y^2] Cos[2 x]; g[x_, y_] := -Sin[x] + 2 Sin[y^2] Sin[2 x]; pts = FindAllCrossings2D[{f[x, y], g[x, y]}, {x, -7/2, 4}, {y, -9/5, 21/5}, Method -> {"Newton", "StepControl" -> "LineSearch"}, PlotPoints -> 85, WorkingPrecision -> 20] // Chop; ContourPlot[{f[x, y], g[x, y]}, {x, -7/2, 4}, {y, -9/5, 21/5}, Contours -> {0}, ContourShading -> False, Epilog -> {AbsolutePointSize[6], Red, Point /@ pts}] Whew, that preamble was quite long. Here's my question, then: Are there "neater" (for some definition of "neater") ways to update/reimplement FindAllCrossings2D[] than my attempt?
Here is my latest code for this function, from Chapter 12 of the third edition of "Mathematica in Action". It is pretty short, but I will let you work out if it is faster or more robust than yours. Note the PlotPoints option for difficult cases. FindRoots2D::usage = "FindRoots2D[funcs,{x,a,b},{y,c,d}] finds all nontangential solutions to {f=0, g=0} in the given rectangle."; Options[FindRoots2D] = {PlotPoints -> Automatic, MaxRecursion -> Automatic}; FindRoots2D[funcs_, {x_, a_, b_}, {y_, c_, d_}, opts___] := Module[ {fZero, seeds, signs, fy}, fy = Compile[{x, y}, Evaluate[funcs[[2]]]]; fZero = Cases[Normal[ ContourPlot[ funcs[[1]] == 0, {x, a-(b-a)/97, b+(b-a)/103}, {y, c-(d-c)/98, d+(d-c)/102}, Evaluate[FilterRules[{opts}, Options[ContourPlot]]]]], Line[z_] :> z, Infinity]; seeds = Flatten[( (signs = Sign[Apply[fy, #1, {1}]]; #1[[1 + Flatten[Position[Rest[signs*RotateRight[signs]], -1]]]]) & ) /@ fZero, 1]; If[seeds == {}, {}, Select[ Union[({x, y} /. FindRoot[{funcs[[1]], funcs[[2]]}, {x, #1[[1]]}, {y, #1[[2]]}, Evaluate[FilterRules[{opts}, Options[FindRoot]]]] & ) /@ seeds, SameTest -> (Norm[#1 - #2] < 10^(-6) & )], a <= #1[[1]] <= b && c <= #1[[2]] <= d & ]]]
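For comparison with the example in the question, a call might look like this (I have not re-run it here; the options are forwarded to ContourPlot and FindRoot by the FilterRules calls inside the function):

```mathematica
f[x_, y_] := -Cos[y] + 2 y Cos[y^2] Cos[2 x];
g[x_, y_] := -Sin[x] + 2 Sin[y^2] Sin[2 x];

pts = FindRoots2D[{f[x, y], g[x, y]}, {x, -7/2, 4}, {y, -9/5, 21/5},
   PlotPoints -> 85];

ContourPlot[{f[x, y] == 0, g[x, y] == 0}, {x, -7/2, 4}, {y, -9/5, 21/5},
 Epilog -> {AbsolutePointSize[6], Red, Point /@ pts}]
```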
{ "source": [ "https://mathematica.stackexchange.com/questions/275", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/50/" ] }
290
This might be of general interest – I have different questions regarding naming conventions, contexts, subcontexts, shadowing, etc., but I do feel that they are closely related, thus I don't really want to split this post into three. Naming What is the best method to name the package file, the package context, the directory of package files (or a more complex hierarchy of these files)? Which of these names must be the same? I got confused several times before, and though I always manage to solve a situation, I don't feel like I have a good understanding on how these things work. Shadowing When there are interrelated packages with different contexts where some symbols appear in all of these contexts then - when calling contexts in the same session - usually shadowing messages appear. This is useful, when such symbols have different definitions and are unintentionally named the same way, but not, in the following case. If someone has a newly introduced function option , like Verbose , which doesn't have an OwnValue , then it is totally unnecessary to invoke shadowing messages, as no call of Verbose could do any harm. There still might be difference in the overall description of Verbose in two packages (even when all OwnValues , DownValues , etc. are the same), for example their usage messages might differ, as different functions would utilize the Verbose option in the different packages. What is the best way to deal with these things? Should a Common.m package be introduced, and all the related packages be moved under a common context-name and/or directory? Do they have to be in the same directory? Grouping and sub-contexts Following point 2, when is it useful to introduce sub-contexts (e.g. myContext`format` and myContext`content` )? Should these be split into different files? How should these files be named? Is it necessary then to include a Common.m too or is it just for convenience? What should be kept in Common.m ?
Part of what you are asking is of course a matter of taste and habits, but here are my 2 cents: 1) if you want Mathematica to find your package files with a Needs or Get their context names must agree with the hierarchy of directories and filenames. I don't see any good reasons to diverge from that standard convention. For complex packages with many files you will also typically have a Kernel-subdirectory with an init.m , but I think these things are relatively well documented. 2) My personal opinion is that using symbols for option names is asking for exactly this kind of problem. Obviously at least some of the WRI personnel think the same, since in later versions there are more and more options that accept strings as names and the new way to work with options also fully supports this. If you are worried about cluttering your code with too many pairs of "", note that this will work alright: Options[f] = {"Verbose" -> False} f[OptionsPattern[]] := (If[OptionValue["Verbose"], Print["I'm so verbose!"]]; RandomReal[]) f[Verbose -> True] or even: f[someothercontext`Verbose -> True] What you lose is the possibility of having a usage message bound to the option name, but as you have noticed, if more than one function uses the same option name, the usage message is of limited use anyway and the details must be explained in the documentation of the function, not the option. WRI has the same problem, obviously: at least I don't think that this usage is of very much help: ?Method Method is an option for various algorithm-intensive functions that specifies what internal methods they should use. 3) Introducing sub-contexts is useful when things get more complex and parts of the whole can be split up in more or less independent parts. Of course giving these parts names that make it easy to recognize what they provide is a good idea, but I think that's so obvious that I doubt I fully understand that part of your question. 
If you want these parts to be loadable without the other parts, you must split them in different files, otherwise it's up to you from the technical viewpoint. From the code organization point of view I would think that if it makes sense to split your packages in separate contexts, it usually is also a good idea to split them into separate files. That becomes even more important if several people work on the various parts, but I feel there is not much Mathematica code written that way (except within WRI). Of course it's not necessary to include a Common.m, but as you have mentioned it's a good approach to collect all symbols that the various parts share into one common context/file, and Common.m ( myPackage`Common` ) is a common convention that is also used by WRI, so I'd stick with it. On the other hand I would consider it as a good design of your package when you don't need a Common.m, since then you obviously managed to really split your package in independent parts.
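To make point 1 concrete, here is a sketch of the conventional layout (all names are placeholders; the only hard requirement is that context names mirror the directory/file hierarchy):

```
$UserBaseDirectory/Applications/
  MyApp/
    Kernel/
      init.m        (* just: Get["MyApp`MyApp`"] *)
    MyApp.m         (* BeginPackage["MyApp`"] ... EndPackage[] *)
    Common.m        (* BeginPackage["MyApp`Common`"] -- shared symbols *)
    Format.m        (* BeginPackage["MyApp`Format`", {"MyApp`Common`"}] *)
```

With this in place, Needs["MyApp`"] loads Kernel/init.m, and Needs["MyApp`Format`"] finds Format.m on its own, pulling in Common.m via its context dependency list.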
{ "source": [ "https://mathematica.stackexchange.com/questions/290", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
296
I'm using Raster to create large graphics (1000 by 1666), which I want to Export as a JPEG or TIFF in a resolution that can be printed in a large size, 3' by 5'. I'm lost in the maze of options available, and my exported files are always at very low resolution.
First, some clarifications:

- pure image size: pixels × pixels
- Mathematica ImageSize: the distance × distance of the image, defined as a multiple of 1/72'', which is approximately 0.353 mm (1/72'' is called a printer's point)
- printing size: distance × distance on the printing media
- printer resolution (dpi): depends on printer characteristics; it's the number of small drops it can place in a linear inch; nowadays, above 1000 dpi; generally, due to the mechanical characteristics of the printer, it can put more drops in a "horizontal" line than in a vertical one; some printers allow this value to be changed for ink economy
- image resolution printing output (dpi): the number of pixels defined in the image that will be placed in a 1-inch row or column on the sheet of paper
- Mathematica ImageResolution: a way to specify how many pixels to generate in the exported file, etc.

So, let's suppose you have a 1000 × 1000 pixel image file. What is the resolution of your image? If it is not shown, neither on the screen nor on paper, it has no resolution (sure, your image software can register a specific value for the resolution in the file, but this value has absolutely no impact on your image; see it as something like the date on a digital photograph). (I'm sure someone from the "printing" business would not agree with some of the words I used, but let's use this as my personal practical definition.)

If you print that 1000 × 1000 pixel image on 4" × 4" paper, it will have an image resolution printing output of 1000/4" = 250 dpi. If you printed with a printer resolution of 2500 × 2500 dpi, then your printer placed 10 × 10 drops of ink to render each pixel of your digital image (this also allows for correct color rendering from just 3 or 5 different ink reservoirs).
If you display the same 1000 × 1000 image on your screen at 100 % zoom (where each pixel of the image occupies exactly one pixel on your screen), most likely your image will measure 1000/72" (if you use a ruler), since most screens have a resolution of 72 dpi (recent laptops may have substantially higher resolution).

So, another important question is: what should my image resolution printing output be? When printing, you should have an image resolution printing output adapted to the distance at which the image will be seen. It is common to say that someone with 20/20 sight is able to distinguish 0.3 to 0.4 arc minutes at most. Taking the value 0.3, we could say that, for the printing to be perfect, the image resolution printing output, in dpi, should be (360/(0.3/60))/(2 d*Pi), where d stands for the viewing distance (in inches).

Nevertheless, I'll add here my personal experience with printed results (where, for obvious reasons, things aren't as perfect as on perfectly geometrical displays): below 150 dpi, you will start to easily see the pixels on your printed support; above 300 dpi, only with very precise printers, and looking closely, will you see a difference from the same image at 300 dpi. Out of curiosity, these practical limits correspond, for perfect vision, to a viewing distance of between 1 and 2 meters.

For your example, a 3' × 5' print, I think that a 200 dpi image resolution printing output is more than enough, since it is probably a print that will be observed from a certain distance. Everyone farther than 1.5 m will be beyond the physical capability to distinguish pixels; and I would add that down to 1/4 of this distance, the image will still be perfectly acceptable (after all, we lived pretty happily with 72/96 dpi displays until not so long ago...). This means you would need 3 × 12 × 200 by 5 × 12 × 200 = 7200 × 12000 pixels in your file.

How to generate this in Mathematica? There are a lot of different ways.
I will show you a couple of examples. The following creates an image 100 "printer's points" wide, meaning 100*1/72'', which corresponds to approximately 36 mm.

a = Plot[x^2, {x, 0, 1}, ImageSize -> 100]

Unfortunately, what that means is a little hard to understand, since the size that image occupies on your screen is probably not 36 mm. It depends on the Magnification, on the difference between your screen's true resolution and what Mathematica reads it as (they don't always match), etc. Nevertheless, if you activate the ruler (Window -> Show Ruler), you will see that it matches. So, think of it more as meta information...

The following exports the previously generated image with the default value of ImageResolution, which is 72 dpi. This means that your jpg file will have (100*1/72'')*72 dpi = 100 horizontal pixels.

Export["a.jpg", a]

The following exports a 200-horizontal-pixel image (it overrides your original option, defined in the Plot):

Export["a.jpg", a, ImageSize -> 200]

And with the following you specify the ImageResolution for the Export function. This is probably what you are looking for:

Export["a.jpg", a, ImageResolution -> 200]

So, I recommend that you play around with ImageSize to get a good-looking image on your screen (the texts at the correct size, etc.), and then Export it specifying the ImageResolution to get the 7200 × 12000 pixel file. (See JPEG specifications to get it into a reasonable file size.)
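For completeness, the arithmetic behind the 7200 × 12000 figure, and how ImageSize and ImageResolution combine into exported pixels, can be sketched in Mathematica itself. This is only a sketch under the values assumed above (200 dpi, a 3' × 5' print); the exact interaction of these options can vary between versions and export formats:

```mathematica
dpi = 200;            (* chosen image resolution printing output *)
inches = {3, 5}*12;   (* 3' x 5' print expressed in inches *)
pixels = inches*dpi   (* {7200, 12000}: pixels needed in the exported file *)

(* ImageSize is measured in printer's points (1/72''), so an exported
   raster has roughly ImageSize*ImageResolution/72 pixels across.
   Fixing ImageSize at the physical print width (in points) makes the
   export come out at the desired pixel count: *)
a = Plot[x^2, {x, 0, 1}, ImageSize -> 72*inches[[1]]];
Export["a.jpg", a, ImageResolution -> dpi]  (* 36'' at 200 dpi = 7200 px wide *)
```
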
{ "source": [ "https://mathematica.stackexchange.com/questions/296", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/170/" ] }
313
This is the most common pattern to compute a table of results: Table[function[p], {p, parameters}] (regardless of how it's implemented, it could be a Map ) The problem with this is that if the calculation is interrupted before it's finished, the partial results will be lost. We can do this in a safely interruptible way like so: Do[AppendTo[results, {p, function[p]}], {p, parameters}] If this calculation is interrupted before it's finished, the intermediate results are still preserved. We can easily restart the calculation later, for those parameter values only for which function[] hasn't been run yet. Question: What is the best way to achieve this when running calculations in parallel? Assume that function[] is expensive to calculate and that the calculation time may be different for different parameter values. The parallel jobs must be submitted in a way to make best use of the CPU. The result collection must not be shared between the parallel kernels as it may be a very large variable (i.e. I don't want as many copies of it in memory as there are kernels) Motivation: I need this because I want to be able to make my calculations time constrained. I want to run the function for as many values as possible during the night. In the morning I want to stop it and see what I got, and decide whether to continue or not. Notes: I'm sure people will mention that AppendTo is inefficient and is best avoided in a loop. I think this is not an issue here (considering that the calculations run on the subkernels and function[] is expensive). It was just the simplest way to illustrate the problem. There could be other ways to collect results, e.g. using a linked list, and flattening it out later. Sow / Reap is not applicable here because they don't make it possible to interrupt the calculation. About the long running time: The most expensive part of the calculations I'm running are in C++ and called through LibraryLink, but they still take a very long time to finish.
Regarding using Sow instead of AppendTo, you may find this trick useful:

Last[Last[Reap[CheckAbort[Do[Pause[0.1]; Sow[x], {x, 30}], ignored]]]]

(Try running this and aborting it partway through. It runs for 3 seconds due to the Pause[0.1] commands.) Do is used instead of Table, and the results are returned with Sow. The CheckAbort catches when you abort your computation partway through and does the useful tidying up (in this case, returning something, anything, to the enclosing Reap).

You can combine this with a version of Sow that always runs on the master kernel:

SetSharedFunction[ParallelSow];
ParallelSow[expr_] := Sow[expr]

(Tangentially related blog post I did: http://blog.wolfram.com/2011/04/20/mathematica-qa-sow-reap-and-parallel-programming/ )

Then you could use this parallelized version:

In[3]:= Last[Last[Reap[
   CheckAbort[ParallelDo[Pause[0.1]; ParallelSow[x], {x, 30}], ignored]]]]

Out[3]= {6, 1, 7, 2, 8, 3, 9, 4, 10, 5, 16, 11, 17, 12, 18, 13, 19, 14, 20, 15, 21, 26, 22, 27, 23, 28, 24, 29, 25, 30}

However, as you can see, the results come in in an unpredictable order, so something slightly cleverer is in order. Here is one way (probably not the best, but the first thing I thought of):

In[5]:= Catch[Last[Last[
   Reap[CheckAbort[
     Throw[ParallelTable[Pause[0.1]; ParallelSow[x], {x, 30}]], ignored]]]]]

Out[5]= {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30}

The Throw is used to jump outside the Reap if the ParallelTable finishes. (Getting messy!) To be safe, this should be wrapped up in a function, and tags (a.k.a. the optional second argument) should be used on the Throw, Catch, Sow, and Reap.
{ "source": [ "https://mathematica.stackexchange.com/questions/313", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
334
I am looking for a simple, robust way to evaluate an expression only one step, and return the result in a held form. The definition of a single step is ambiguous, and this itself is probably worthy of exploration. Some interpretations will raise the question of what should be returned. I am specifically thinking of capturing the right-hand side of function definitions and rules. Another view would be the first evaluation step that transforms the entire input expression. Examples of desired output: x = 1; y = 1; q := 1 + 2 x + 3 y (* step[q] -----> HoldForm[1 + 2 x + 3 y] *) val = 1; f[x_] /; x < 5 := ("X < 5"; val) f[_, y_] := y val f[x_] := f[x - 1] (* step[ f[3] ] -----> HoldForm["X < 5"; val] *) (* step[ f[3, 4] ] -----> HoldForm[4 val] *) (* step[ f[5] ] -----> HoldForm[f[5 - 1]] *) For an internal function: x = 7; y = 4; (* step[ Mod[x Pi, y] ] -----> HoldForm[ Mod[7 Pi, 4] ] *) (* step[ Mod[7 Pi, 4] ] -----> HoldForm[ 7 Pi - Quotient[7 Pi, 4] 4 ] *) each because that is the first step in Trace that transforms the entire expression. I realize that for user-defined functions it is possible to manipulate *Values manually, but finding and matching all possible *Values is complicated, and I am looking for a universal approach using something like TraceScan . Trace keeps track of the level of evaluation with brackets, but TraceScan does not appear to provide this information to its given functions. It would be possible to use Trace and then extract the desired step afterward, but I want something that does not carry out the rest of the evaluation.
I believe I have found the solution I was seeking. It returns the first step that transforms the entire expression, and it does so without further evaluation. The P = (P = ... construct is there to skip the untransformed expression.

SetAttributes[step, HoldAll]
step[expr_] := Module[{P},
  P = (P = Return[#, TraceScan] &) &;
  TraceScan[P, expr, TraceDepth -> 1]
]

I hope that this function will be as helpful to others as I expect it will be to me.
{ "source": [ "https://mathematica.stackexchange.com/questions/334", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
353
Suppose you want to create a function which has optional arguments. Maeder's book Programming in Mathematica covers the topic extensively up to version 3, but some things have changed/evolved since then. So now what is the standard code to use for a function func[x, ...] which takes optional arguments?
The main change since that time seems to be that the modern way of using options is associated with the OptionsPattern[] - OptionValue commands. A typical way of defining a function would be:

Options[f] = {FirstOption -> 1, SecondOption -> 2};

f[x_, y_, opts : OptionsPattern[]] := Print[{x, y, OptionValue[FirstOption], OptionValue[SecondOption]}]

The OptionsPattern[] is a pattern which is similar to ___?OptionQ in its purpose, but has subtle differences, some of which are discussed in this question. In the same question, it is discussed what the major differences and advantages / disadvantages of the old and new approaches are. You can still use the old way, though. The OptionValue command is a rather magical function, which knows which function you are in, so that you often don't have to supply the function name explicitly. However, you can always do so, since OptionValue has forms with more arguments. What you cannot do is mix the two approaches: if you declare options as ___?OptionQ, then OptionValue won't work.

The second difference is that there is a new built-in function, FilterRules, which can be used to filter options. Previously, there was a package by Maeder called FilterOptions, which provided similar functionality, but was not in widespread use, just because not everyone knew about it. The typical options-filtering call looks like

g[x_, y_, opts : OptionsPattern[]] := f[x, y, Sequence @@ FilterRules[{opts}, Options[f]]]

Filtering options is a good practice, so this addition is quite useful. If you wanted to pass options that belong to other functions (e.g. functions that are called inside your function g), you would do something like this, and it would work even if useQ was actually an option of the function p:

g[x_, y_, opts : OptionsPattern[{g, f, p, q}]] := Module[
  {s = If[OptionValue[useQ],
     q[y, Sequence @@ FilterRules[{opts}, Options[q]]],
     p[x, Sequence @@ FilterRules[{opts}, Options[p]]]]},
  f[x, s, Sequence @@ FilterRules[{opts}, Options[f]]]
]
{ "source": [ "https://mathematica.stackexchange.com/questions/353", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/137/" ] }
373
Here's a small listing where I've used Esc q Esc to typeset θ in the notebook: Clear[ f, θ, Subscript[x, r] ] Subscript[x, r] := 3 f[θ_] := Subscript[x, r] Cos[θ] Plot[f[θ], {θ, 0, Pi}] (in my notebook this looked like $x_r$, not Subscript[x, r] for example). This produces a message from Clear of the form: Clear::ssym : x_r is not a symbol or a string What is curious is that I appear to be able to assign to this variable $x_r$ without any trouble, yet it is apparently treated differently than my other symbols f and θ . How exactly does Mathematica define a symbol. Why can I use $x_r$ like a variable, yet it does not have this symbol characterization?
Your code reveals exactly why Clear complains: Subscript[x, r] is not a Symbol nor a String . When you assign a value to it , you're setting a DownValue not an OwnValue ; in other words, you're setting the value of a function not a variable. To use $x_r$ as a symbol, use the Notation` package's function, Symbolize . I'd recommend using it from the palette directly, as it has all of the intricacies already set up for you.
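For reference, the palette action corresponds roughly to the following code. The ParsedBoxWrapper form is what recent versions of the Notation` package use (older versions used a different box wrapper), so treat this as a sketch rather than a version-proof recipe:

```mathematica
Needs["Notation`"]
(* Make the subscripted form x_r parse as a single, atomic symbol *)
Symbolize[ParsedBoxWrapper[SubscriptBox["x", "r"]]]
```

After this, the typeset $x_r$ is a genuine Symbol, so Clear and ordinary OwnValue assignments work on it as expected.
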
{ "source": [ "https://mathematica.stackexchange.com/questions/373", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/10/" ] }
434
How can I express a trigonometric equation / identity in terms of a given trigonometric function? using following trigonometric identities Sin[x]^2+Cos[x]^2==1 Sin[x]/Cos[x]==Tan[x] Csc[x]==1/Sin[x] Sec[x]==1/Cos[x] Cot[x]==1/Tan[x] Examples $$\text{convert}(\sin x,\cos)\Rightarrow \pm\sqrt{1-\cos^2(x)}$$ $$\text{convert}(\cos x,\sin)\Rightarrow \pm\sqrt{1-\sin^2(x)}$$ $$\text{convert}\left(\frac{\cos x}{\sin x},\tan\right)\Rightarrow\frac{1}{\tan x}$$ convert[eqn_,trigFunc_]:=??
This is a new version of my answer in response to the edited question (the first version is here). It is based on the same idea, but the Weierstrass substitution rules are now generated by Mathematica (instead of entered by hand) and results with $\pm$ solutions are correctly returned.

First, generate the Weierstrass substitution rules:

$TrigFns = {Sin, Cos, Tan, Csc, Sec, Cot};
(WRules = $TrigFns == (Through[$TrigFns[x]] /. x -> 2 ArcTan[t] // TrigExpand // Together) // Thread)

Then, Partition[WRules /. Thread[$TrigFns -> Through[$TrigFns[x]]], 2] // TeXForm returns

$$ \begin{align} \sin (x)&=\frac{2 t}{t^2+1}\,, & \cos (x)&=\frac{1-t^2}{t^2+1}\,, \\ \tan (x)&=-\frac{2 t}{t^2-1}\,, & \csc (x)&=\frac{t^2+1}{2 t}\,, \\ \sec (x)&=\frac{-t^2-1}{t^2-1}\,, & \cot (x)&=\frac{1-t^2}{2 t} \ . \end{align} $$

Then, we invert the rules using

invWRules = #[[1]] -> Solve[#, t, Reals] & /@ WRules

which we can finally use in the convert function:

convert[expr_, (trig : Alternatives @@ $TrigFns)[x_]] :=
 Block[{temp, t},
  temp = expr /. x -> 2 ArcTan[t] // TrigExpand // Factor;
  temp = temp /. (trig /. invWRules) // FullSimplify // Union;
  Or @@ temp /. trig -> HoldForm[trig][x] /. ConditionalExpression -> (#1 &)]

Note that the final line has HoldForm to prevent things like 1/Sin[x] automatically being rewritten as Csc[x], etc.

Here are some test cases - it is straightforward to check that the answers are correct (but don't forget to use ReleaseHold):

In[6]:= convert[Sin[x], Cos[x]]
Out[6]= -Sqrt[1 - Cos[x]^2] || Sqrt[1 - Cos[x]^2]

In[7]:= convert[Sin[x] Cos[x], Tan[x]]
Out[7]= Tan[x]/(1 + Tan[x]^2)

In[8]:= convert[Sin[x] Cos[x], Cos[x]]
Out[8]= -Cos[x] Sqrt[1 - Cos[x]^2] || Cos[x] Sqrt[1 - Cos[x]^2]

In[9]:= convert[Sin[2 x] Cos[x], Sin[x]]
Out[9]= -2 Sin[x] (-1 + Sin[x]^2)

In[10]:= convert[Sin[2 x] Tan[x]^3, Cos[x]]
Out[10]= 2 (-2 + 1/Cos[x]^2 + Cos[x]^2)

A couple of quick thoughts about the above solution:

- It assumes real arguments for the trig functions. It would be nice if it didn't do this and could be extended to hyperbolic trig and exponential functions.
- When two solutions are given, it should return the domains of validity - or combine the appropriate terms using Abs[].
- It should be extended to handle things like convert[Sin[x], Cos[2x]].

If anyone feels like implementing any of these things, please feel free!
{ "source": [ "https://mathematica.stackexchange.com/questions/434", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/99/" ] }
441
What is the most space and time efficient way to implement a Trie in Mathematica ? Will it be practically faster than what is natively available in appropriate cases?
UPDATE Since version 10, we have Associations. Here is the modified code for trie building and querying, based on Associations. It is almost the same as the old code (which is below): ClearAll[makeTreeAssoc]; makeTreeAssoc[wrds : {__String}] := Association@makeTreeAssoc[Characters[wrds]]; makeTreeAssoc[wrds_ /; MemberQ[wrds, {}]] := Prepend[makeTreeAssoc[DeleteCases[wrds, {}]], {} -> {}]; makeTreeAssoc[wrds_] := Reap[ If[# =!= {}, Sow[Rest[#], First@#]] & /@ wrds, _, #1 -> Association@makeTreeAssoc[#2] & ][[2]] You can see that the only difference is that Association is added to a couple of places, otherwise it's the same code. The lookup functions also are very similar: ClearAll[getSubTreeAssoc]; getSubTreeAssoc[word_String, tree_] := Fold[Compose, tree, Characters[word]] ClearAll[inTreeQAssoc]; inTreeQAssoc[word_String, tree_] := KeyExistsQ[getSubTreeAssoc[word, tree], {}] The tests similar to the ones below (for entire dictionary) show that the lookup based on this trie (Associations - based) is about 3 times faster than the one based on rules, for a trie built from a dictionary. The new implementation of getWords is left as an exercise to the reader (in fact, that function could be optimized a lot, by storing entire words as leaves in the tree, so that one doesn't have to use StringJoin and combine the words). A combination of rules and recursion is able to produce rather powerful solutions. Here is my take on it: ClearAll[makeTree]; makeTree[wrds : {__String}] := makeTree[Characters[wrds]]; makeTree[wrds_ /; MemberQ[wrds, {}]] := Prepend[makeTree[DeleteCases[wrds, {}]], {} -> {}]; makeTree[wrds_] := Reap[If[# =!= {}, Sow[Rest[#], First@#]] & /@ wrds, _, #1 -> makeTree[#2] &][[2]] ClearAll[getSubTree]; getSubTree[word_String, tree_] := Fold[#2 /. 
#1 &, tree, Characters[word]] ClearAll[inTreeQ]; inTreeQ[word_String, tree_] := MemberQ[getSubTree[word, tree], {} -> {}] ClearAll[getWords]; getWords[start_String, tree_] := Module[{wordStack = {}, charStack = {}, words}, words[{} -> {}] := wordStack = {wordStack, StringJoin[charStack]}; words[sl_ -> ll_List] := Module[{}, charStack = {charStack, sl}; words /@ ll; charStack = First@charStack; ]; words[First@Fold[{#2 -> #1} &, getSubTree[start, tree], Reverse@Characters[start]] ]; ClearAll[words]; Flatten@wordStack]; The last function serves to collect the words from a tree, by performing a depth-first tree traversal and maintaining the stack of accumulated characters and words. Here is a short example: In[40]:= words = DictionaryLookup["absc*"] Out[40]= {abscess,abscessed,abscesses,abscessing,abscissa,abscissae,abscissas, abscission,abscond,absconded,absconder,absconders,absconding,absconds} In[41]:= tree = makeTree[words] Out[41]= {a->{b->{s->{c->{e->{s->{s->{{}->{},e->{d->{{}->{}},s->{{}->{}}}, i->{n->{g->{{}->{}}}}}}},i->{s->{s->{a->{{}->{},e->{{}->{}},s->{{}->{}}}, i->{o->{n->{{}->{}}}}}}},o->{n->{d->{{}->{},e->{d->{{}->{}},r->{{}->{},s->{{}->{}}}}, i->{n->{g->{{}->{}}}},s->{{}->{}}}}}}}}}} In[47]:= inTreeQ[#,tree]&/@words Out[47]= {True,True,True,True,True,True,True,True,True,True,True,True,True,True} In[48]:= inTreeQ["absd",tree] Out[48]= False In[124]:= getWords["absce", tree] Out[124]= {"abscess", "abscessed", "abscesses", "abscessing"} I only constructed here a bare-bones tree, so you can only test whether or not the word is there, but not keep any other info. 
Here is a larger example:

In[125]:= allWords = DictionaryLookup["*"];

In[126]:= (allTree = makeTree[allWords]); // Timing
Out[126]= {5.375, Null}

In[127]:= And @@ Map[inTreeQ[#, allTree] &, allWords] // Timing
Out[127]= {1.735, True}

In[128]:= getWords["pro", allTree] // Short // Timing
Out[128]= {0.015, {pro, proactive, proactively, probabilist, <<741>>, proximate, proximately, proximity, proxy}}

In[129]:= DictionaryLookup["pro*"] // Short // Timing
Out[129]= {0.032, {pro, proactive, proactively, probabilist, <<741>>, proximate, proximately, proximity, proxy}}

I don't know which approach has been used for the built-in functionality, but the above implementation seems to be generally in the same class for performance. The slowest part is due to the top-level tree-traversing code in getWords. It is slow because the top-level code is slow. One could speed it up considerably by hashing words to integers - then it can be Compiled. This is how I'd do it, if I were really concerned with speed.

EDIT

For a really nice application of a Trie data structure, where it allows us to achieve a major speed-up (w.r.t. using DictionaryLookup, for example), see this post, where it was used to implement an efficient Boggle solver.
{ "source": [ "https://mathematica.stackexchange.com/questions/441", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
451
Currently, one may save notebooks as PDFs from the menu by Save As... and then selecting PDF (on a Mac, I imagine it is similar on other OSs). However, the resulting PDF does not have preserve the syntax highlighting of the code, even though things like plots are coloured. Printing to a PDF has the same effect (again, all this on a Mac). Is there some way to save a notebook to PDF format so that syntax highlighting is preserved? Here's an example of what I mean: PDF: on-screen (mathematica notebook, screenshot): I feel that I am missing something obvious, but what?
The default style sheets set ShowSyntaxStyles -> False for the "Printout" environment. You could change the notebook to use a style sheet that doesn't set this. Probably the easiest way is to copy the definition from Default.nb, and modify it: Cell[StyleData[All, "Printout"], ShowSyntaxStyles->True]
{ "source": [ "https://mathematica.stackexchange.com/questions/451", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/16/" ] }
501
What determines the value of $MaxNumber ? $MaxNumber 1.233433712981650*10^323228458 Mathematica can instantly calculate: 44787922`! 1.0809571*10^323228455 But refuses to calculate: 44787923`! During evaluation of In[3]:= General::ovfl: Overflow occurred in computation. >> Overflow[] It seems like an arbitrary cutoff rather than a limitation of the system.
If you calculate Log[2,Log[2,$MaxNumber]] , you'll get 29.999999828017338886225739 which is remarkably close to 30. Therefore I conclude that Mathematica calculates with a 31-bit exponent (1 bit for the exponent's sign). Which means that if Mathematica uses the same ordering as IEEE floats (i.e. first sign bit, then exponent, then mantissa), the first 32 bits (i.e. exactly 4 bytes) of a Mathematica floating point number contain the sign and the exponent.
{ "source": [ "https://mathematica.stackexchange.com/questions/501", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
523
I'd like to add an extension to a filename before the file extension, otherwise leaving the given filename the same. In particular, absolute filenames should stay absolute, and relative filenames should stay relative. Examples: suppose the piece I want to insert is ".123" , then /home/me/dir/foo.txt becomes /home/me/dir/foo.123.txt . xyz.csv becomes xyz.123.csv foobar becomes foobar.123 (just append in case of no extension) I don't expect to be dealing with filenames that already have multiple extensions, but just in case, the desired behavior is nomnom.tar.gz becomes nomnom.123.tar.gz I can ensure that the filename does not end with a slash. The obvious way to do this is by concatenating the directory name, the file base name, the new piece, and the extension: insertPiece[fn_, piece_] := FileNameJoin[{ DirectoryName[fn], StringJoin[{FileBaseName[fn], piece, ".", FileExtension[fn]}] }] but is there some corner case I missed in which this wouldn't work? Is there a more efficient or more elegant way to do it?
{ "source": [ "https://mathematica.stackexchange.com/questions/523", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/79/" ] }
528
Does the Mathematica graphics system have any concept of intersecting graphics? I've not found much in the documents so far. For example, if I want to show the intersection of two shapes: Graphics[{Rectangle[], Disk[{0.2, 0}, .5]}] I know I can use Opacity : Graphics[{Opacity[0.8], Red, Rectangle[], Green, Disk[{0.2, 0}, .5]}] But is there a way of specifying the colours of intersecting areas directly? It doesn't seem to be possible to 'address' the intersecting shapes any other way. In the same vein, is is possible to 'extract' the graphical intersection of arbitrary shapes, without returning to the original geometry and calculating it? Could you obtain this type of entity easily given the above specification (these are just examples...!): I think it might be easier with raster images, but am interested for now in vector graphics.
How about RegionPlot ? RegionPlot[ { (x - 0.2)^2 + y^2 < 0.5 && 0 < x < 1 && 0 < y < 1, (x - 0.2)^2 + y^2 < 0.5 && ! (0 < x < 1 && 0 < y < 1), ! ((x - 0.2)^2 + y^2 < 0.5) && 0 < x < 1 && 0 < y < 1 }, {x, -1, 1.5}, {y, -1, 1.5}, PlotStyle -> {Red, Yellow, Blue} ] EDIT in response to Szabolcs's comment: PointInPoly[{x_, y_}, poly_List] := Module[{i, j, c = False, npol = Length[poly]}, For[i = 1; j = npol, i <= npol, j = i++, If[((((poly[[i, 2]] <= y) && (y < poly[[j, 2]])) || ((poly[[j, 2]] <= y) && (y < poly[[i, 2]]))) && (x < (poly[[j, 1]] - poly[[i, 1]])*(y - poly[[i, 2]])/(poly[[j, 2]] - poly[[i, 2]]) + poly[[i, 1]])), c = ¬ c];]; c] (from an answer I gave in MathGroup ) RegionPlot[{ PointInPoly[{x, y}, {{1, 3}, {3, 4}, {4, 7}, {5, -1}, {3, -3}}] && PointInPoly[{x, y}, {{2, 2}, {3, 3}, {4, 2}, {0, 0}}], PointInPoly[{x, y}, {{1, 3}, {3, 4}, {4, 7}, {5, -1}, {3, -3}}] && ¬ PointInPoly[{x, y}, {{2, 2}, {3, 3}, {4, 2}, {0, 0}}], ¬ PointInPoly[{x, y}, {{1, 3}, {3, 4}, {4, 7}, {5, -1}, {3, -3}}] && PointInPoly[{x, y}, {{2, 2}, {3, 3}, {4, 2}, {0, 0}}]}, {x, 0, 6}, {y, -4, 8}, PlotPoints -> 100, PlotStyle -> {Red, Yellow, Blue} ]
{ "source": [ "https://mathematica.stackexchange.com/questions/528", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/61/" ] }
533
It is my practice to place Condition expressions on the left side of := and :> in almost every case. I find this to be more logical as it is part of the pattern With the exception of use inside Module , Block , or With on the RHS, which is a special case, the Condition depends only on the LHS, and therefore IMHO is more logically placed on the LHS Its behavior remains consistent when used with = and -> f[x_] /; x < 5 := 1 and g[x_] /; x < 5 = 1 behave similarly f[x_] := 1 /; x < 5 and g[x_] = 1 /; x < 5 behave differently The evaluation path is significantly less complicated Placing the condition on the RHS requires the internal use of RuleCondition and $ConditionHold which can significantly slow down simple functions. Clear[f, g] f[x_] /; OddQ[x] := 1 f[x_] := 0; g[x_] := 1 /; OddQ[x] g[x_] := 0; f[4] //Trace {f[4], {OddQ[4], False}, 0} g[4] //Trace {g[4],{{OddQ[4],False},RuleCondition[$ConditionHold[$ConditionHold[1]],False],Fail},0} a = Range@1*^6; Timing[f /@ a;] Timing[g /@ a;] {0.421, Null} {0.655, Null} Nevertheless, the documentation for Condition shows the RHS form and many experienced users also seem to favor this form. Which form should be standard, and why? A brief edit: The form f[x_ /; x < 5] := 1 is what I use most often as should be clear to those who read my answers on StackOverflow. I omitted this form specifically because I didn't want to spawn a discussion (bad for SE sites) about purely-stylistic differences. I see now that this may have had the opposite effect. Rather I wish to focus this question on the apparently canonical yet IMHO inferior RHS placement and what its merits are.
I prefer the Condition to appear on the left-hand-side and outside the square brackets for several reasons. Type signature I often think of the condition as (part of) the analog of the signature in a typed language, so it should go on the left hand side. Order of operations I like that the elements of the function definition appear in the order in which I want them to happen: f[x_] /; x > 0 := Sqrt[x] Look for f[x_]. Check that x > 0. Return Sqrt[x]. (Optional) Check any postcondition (see below). Function contract When an argument-checking definition of the form f[else___] := Throw["Error in f."] appears, a left-hand-side Condition often plays the role of a precondition in the sense of Design By Contract . A Condition can also appear on the right-hand-side and this plays the role of a postcondition : f[x_] /; x > 0 := Sqrt[x] /; Sqrt[x] > 0 Consistency of appearance I prefer f[x_] /; x > 0 to the alternative f[x_ /; x > 0] for consistency, because sometimes placing the Condition inside the square brackets is not possible, such as when the Condition depends on multiple arguments: f[x_, y_] /; x > y := 1/(x - y) Update: Rationale I think Brett's preference of putting the Condition as close as possible to the quantity to which it applies is equally good so I want to explain why I ended up with my slightly different preference. Basically I was writing a sequence of definitions like this, following Brett's guideline: f[x_ /; c1[x], y_] := this f[x_, y_ /; c2[y]] := that f[x_, y_] /; c3[x, y] := other Note that all of these define f[x, y]. So there are two things I didn't like about that: The key difference between each LHS is the different conditions on x and y, and these are difficult to read quickly here because they all start at different places and are mixed in with f[x_, y_]. When a condition needs to change such that it suddenly starts or stops depending on x or y, I need to move it from inside the square brackets to outside or vice versa. 
Now compare: f[x_, y_] /; c1[x] := this f[x_, y_] /; c2[y] := that f[x_, y_] /; c3[x, y] := other Of course, what would make even more sense would be to adhere to Brett's guideline except in special cases like above! Maybe I will try that now ...
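As a minimal sketch (function names are arbitrary), the three placements discussed above can be compared side by side for a simple one-argument definition — all three behave identically from the caller's point of view, returning unevaluated when the condition fails:

```mathematica
ClearAll[p, q, r]
p[x_ /; x > 0] := Sqrt[x]   (* condition inside the brackets *)
q[x_] /; x > 0 := Sqrt[x]   (* condition on the LHS, outside the brackets *)
r[x_] := Sqrt[x] /; x > 0   (* condition on the RHS *)
{p[4], q[4], r[4]}          (* ==> {2, 2, 2} *)
{p[-4], q[-4], r[-4]}       (* all three return unevaluated *)
```

The difference between the three is therefore in the evaluation path and in readability, as discussed, not in what the definition matches.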
{ "source": [ "https://mathematica.stackexchange.com/questions/533", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
544
SubValues , as discussed in a previous question , are declared as follows f[x_][y_] := {ToString[Unevaluated[x]], ToString[Unevaluated[y]]} But, attempting to use SetAttributes on f only affects the DownValues of f during evaluation, not the SubValues . In other words, if HoldAll is set on f , then only x , in the above code, is held. In code, SetAttributes[f, HoldAll] f[ 1 + 2 ][ 3 + 4 ] (* ==> { "1 + 2", "7" } *) Attempting to use SetAttributes on f[x] results in the error SetAttributes::sym: "Argument f[x] at position 1 is expected to be a symbol." and, similarly, for f[x_] simply because neither are symbols. A work around is not to set a SubValue directly, but, instead, return a pure function and use the third argument to set the attribute, as follows SetAttributes[g, HoldAll] g[x_] := Function[{y}, {ToString[Unevaluated[x]], ToString[Unevaluated[y]]}, {HoldAll} ] g[ 1 + 2 ][ 3 + 4 ] (* ==> {"1 + 2", "3 + 4"} *) But, SubValues[g] returns an empty list, indicating that while equivalent, this construct is not processed in the same manner. So, how does one set the attributes on f such that the SubValues are affected during evaluation?
Your question really is about how to make attributes of f affect also the evaluation of other groups of elements, like y and z in f[x___][y___][z___] . To my knowledge, you can not do it other than using tricks like returning a pure function and the like. This is because, the only tool you have to intercept the stages of evaluation sequence when y and z are evaluated, is the fact the heads are evaluated first. So, anything you can do to divert the evaluation from its standard form (regarding y and z ), must be related to evaluation of f[x] , in particular substituting it by something like a pure function. Once you pass that stage of head evaluation, you have no more control of how y and z will be evaluated, as far as I know. Generally, I see only a few possibilities to imitate this: return a pure function with relevant attributes (as discussed in the linked answer) return an auxiliary symbol with relevant attributes (similar to the first route) play with evaluation stack. An example of this last possibility can be found in my answer here Here is another example with Stack , closer to those used in the question: ClearAll[f]; f := With[{stack = Stack[_]}, With[{fcallArgs = Cases[stack, HoldForm[f[x_][y_]] :> {ToString[Unevaluated[x]], ToString[Unevaluated[y]]}]}, (First@fcallArgs &) & /; fcallArgs =!= {}]]; And: In[34]:= f[1 + 2][3 + 4] // InputForm Out[34]//InputForm= {"1 + 2", "3 + 4"} Perhaps, there are other ways I am not aware of. The general conclusion I made for myself from considering cases like this is that the extent to which one can manipulate evaluation sequence is large but limited, and once you run into a limitation like this, it is best to reconsider the design and find some other approach to the problem, since things will quickly get quite complex and go out of control.
{ "source": [ "https://mathematica.stackexchange.com/questions/544", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/52/" ] }
547
Mathematica can use ContourPlot to draw implicit Cartesian equations, but doesn't seem to have a similar function to plot an implicit polar equation, for example $\theta ^2=\left(\frac{3 \pi }{4}\right)^2 \cos (r)$ What's the best way to do this?
Since ContourPlot[] returns a GraphicsComplex , you could also replace the point list of the plot with g @@@ pointlist where g is the coordinate transformation. For example f[r_, th_] := th^2 - (3 Pi/4)^2 Cos[r] g[r_, th_] := {r Cos[th], r Sin[th]} pl = ContourPlot[f[r, th] == 0, {r, 0, 8 Pi}, {th, 0, 2 Pi}, PlotPoints -> 30]; pl[[1, 1]] = g @@@ pl[[1, 1]]; Show[pl, PlotRange -> All] which produces The advantage of this method is that it also works for coordinate transformations for which the inverse transformation is hard to find.
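As an alternative sketch for this particular curve: when the inverse transformation happens to be easy (here r -> Sqrt[x^2 + y^2], th -> ArcTan[x, y]), one can substitute it into the equation and call ContourPlot directly in Cartesian coordinates. Note that ArcTan[x, y] lies in (-Pi, Pi], so this version may show different theta-branches of the curve than the transformed-contour method above:

```mathematica
(* direct Cartesian version; branch behavior of ArcTan may differ
   from the GraphicsComplex-transformation approach above *)
ContourPlot[
 ArcTan[x, y]^2 == (3 Pi/4)^2 Cos[Sqrt[x^2 + y^2]],
 {x, -8 Pi, 8 Pi}, {y, -8 Pi, 8 Pi}]
```

The transformation-of-the-point-list method remains preferable whenever the inverse transformation is hard or unavailable.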
{ "source": [ "https://mathematica.stackexchange.com/questions/547", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/23/" ] }
552
How do I substitute z^2->x in the following polynomial z^4+z^2+4 ? z^4+z^2+4 /. z^2->x gives z^4+x+4
The reason why the replacement doesn't work is that replacement rules are not mathematical replacements, but pure structural replacements. Therefore the replacement z^2->x just looks for occurrences of the pattern z^2 and replaces that with x . Now z^4 doesn't match that pattern. Also note that rules operate on the internal form, which doesn't always match the displayed form. For example, one would expect a-2b /. 2b->c to result in a-c , but it actually results in a-2b again, because internally the expression reads Plus[a, Times[-2, b]] (you can see that by applying FullForm ), while 2b is Times[2,b] . To do the replacement wanted, one has to use a method which is aware of the mathematics instead of just the structure. One possibility is Solve[p==z^4+z^2+4 && x==z^2, {p}, {z}] which means "Solve the equations given for p while eliminating z". The result then is {{p->4+x+x^2}} Note that the curly braces around z are mandatory because otherwise Mathematica interprets it as domain, resulting in an error message because z is of course no valid domain. Also note that the documentation page of Solve omits the possibility of giving a list of variables to eliminate as third argument (at least I didn't find it). However, you'll find it in a Mathematica tutorial on eliminating variables (but there they use the third argument without braces, which at least for me results in an error message, as written above).
{ "source": [ "https://mathematica.stackexchange.com/questions/552", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/82/" ] }
559
Mathematica includes three functions that I know of which can be used to effectively (if not actually) define a variable and give it a value within a local scope: Module , Block , and With . What are the differences between these functions (and any other scoping constructs that may exist which I don't know about), and how do I decide which one is appropriate to use in any given instance? The documentation does describe what the different constructs do to some extent, but I feel like it's insufficiently clear on how exactly they differ from each other. Having some short examples that demonstrate the differences would be great.
You will find a lot of information in this answer. I will add a few personal notes. Module Use Module when you want to localize variables inside your function's body, and those variables will potentially acquire and/or change their values during the computation. Basic use For example: f[x_]:=Module[{y=x^2},y=y+x;{x,y}] Here, a local mutable variable (symbol) y is local to the Module , and is, indeed, a symbol with a unique name. This is the closest you have in Mathematica to, say, local variables in C. Advanced uses Module also has advanced uses. One of them is to create closures - functions with a persistent state. My third post in this thread illustrates many cases of that and has further references. One example I will steal from there: the following function will produce the next Fibonacci number on demand, and yet it will be as fast as the iterative loop implementation for generation of consecutive Fibonacci numbers (since Module is invoked only once, when the function is defined): Module[{prev, prevprev, this}, reset[] := (prev = 1; prevprev = 1); reset[]; nextFib[] := (this = prev + prevprev; prevprev = prev; prev = this) ]; reset[]; Table[nextFib[], {1000}]; // Timing (* ---> {0.01, Null} *) One problem with persistence created with Module -variables is that one should not generally serialize such state (definitions), for example by saving the state via Save or DumpSave . This is because, the uniqueness of names for Module -generated symbols is guaranteed only within a single Mathematica session. Module also allows one to create local functions , which With does not (except pure functions). This is a very powerful capability. It is particularly useful for writing recursive functions, but not only. In the link mentioned above, there were examples of this. One problem with local functions created by Module is that these symbols won't be automatically garbage-collected when Module finishes (if they have DownValues , SubValues or UpValues . 
OwnValues are fine), and so may lead to memory leaks. To avoid that, one can Clear these symbols inside Module before returning the result. With Use With to define local constants, which can not be changed inside the body of your function. Basic use For example, f[x_,y_]:=With[{sum = x+y},{sum *x, sum *y}] It is instructive to trace the execution of f . You will notice that sum gets replaced by its value very early on, before the body starts evaluating. This is quite unlike Module , where variable entries get replaced by their values in the process of evaluation, just as it would normally happen were the variables global. Advanced uses On an advanced level, With can be used to inject some evaluated code deep into some expression which is otherwise unevaluated: With[{x=5},Hold[Hold[x^2]]] (* Hold[Hold[5^2]] *) and is thus an important meta-programming tool. There are lots of uses for this feature, in particular one can use this to inject code into Compile at run-time right before compilation. This can extend the capabilities / flexibility of Compile quite a bit. One example can be found in my answer to this question. The semantics of With is similar to that of rule substitutions, but an important difference is that With cares about inner scoping constructs (during variable name collisions), while rules don't. Both behaviors can be useful in different situations. Module vs With Both of these are lexical scoping constructs, which means that they bind their variables to lexical their occurrences in the code. Technically, the major difference between them is that you can not change the values of constants initialized in With , in the body of With , while you can change values of Module variables inside the body. On a deeper level, this is because With does not generate any new symbols. It does all the replacements before the body evaluates, and by that time no "constant symbols" are at all present, all of them replaced with their values. 
Module , OTOH, does generate temporary symbols (which are normal symbols with an attribute Temporary ), which can store a mutable state. Stylistically, it is better to use With if you know that your variables are in fact constants, i.e. they won't change during the code execution. Since With does not create extra (mutable) state, the code is cleaner. Also, you have more chances to catch an occasional erroneous attempt in the code to modify such a constant. Performance-wise, With tends to be faster than Module , because it does not have to create new variables and then destroy them. This however usually only shows up for very light-weight functions. I would not base my preference of one over another on performance boosts. Block Basic use Block localizes the value of the variable. In this example, a does not refer to i literally inside Block , but still uses the value set by Block . a:=i Block[{i=2},a] {a,i} Block therefore affects the evaluation stack , not just the literal occurrences of a symbol inside the code of its body. Its effects are much less local than those of lexical scoping constructs, which makes it much harder to debug programs which use Block extensively. It is not much different from using global variables, except that Block guarantees that their values will be restored to their previous values once the execution exits Block (which is often a big deal). Even so, this non-transparent and non-local manipulation of the variable values is one reason to avoid using Block where With and / or Module can be used. But there are more (see below). In practice, my advice would be to avoid using Block unless you know quite well why you need it. It is more error-prone to use it for variable localization than With or Module , because it does not prevent variable name collisions, and those will be quite hard to debug. One of the reasons people suggest to use Block is that they claim it is faster. 
While it is true, my opinion is that the speed advantage is minimal while the risk is high. I elaborated on this point here , where at the bottom there is also an idiom which allows one to have the best of both worlds. In addition to these reasons, as noted by @Albert Retey, using Block with the Dynamic - related functionality may lead to nasty surprises, and errors resulting from that may also be quite non-local and hard to find. One valid use of Block is to temporarily redefine some global system settings / variables. One of the most common such use cases is when we want to temporarily change the value of $RecursionLimit or $IterationLimit variables. Note however that while using Block[{$IterationLimit = Infinity}, ...] is generally okay, using Block[{$RecursionLimit = Infinity}, ...] is not, since the stack space is limited and if it gets exhausted, the kernel will crash. A detailed discussion of this topic and how to make functions tail-recursive in Mathematica, can be found e.g. in my answer to this question . It is quite interesting that the same ability of Block can be used to significantly extend the control the user has over namespaces/symbol encapsulation. For example, if you want to load a package, but not add its context to the $ContextPath (may be, to avoid shadowing problems), all you have to do is Block[{$ContextPath}, Needs[Your-package]] As another example, some package you want to load modifies some other function (say, System`SomeFunction ), and you want to prevent that without changing the code of the package. Then, you use something like Block[{SomeFunction}, Needs[That-package]] which ensures that all those modifications did not affect actual definitions for SomeFunction - see this answer for an example of this. 
Advanced uses Block is a very powerful metaprogramming device, because you can make every symbol (including system functions) temporarily "forget" what it is (its definitions and other global properties), and this may allow one to change the order of evaluation of an expression involving that symbol(s) in non-trivial ways, which may be hard to achieve by other means of evaluation control (this won't work on Locked symbols). There are many examples of this at work, one which comes to mind now is the LetL macro from my answer to this question. Another more advanced use of Block is to ensure that all used variables would be restored to their initial values, even in the case of Abort or exception happening somewhere inside the body of Block . In other words, it can be used to ensure that the system will not find itself in an illegal state in the case of sudden failure. If you wrap your critical (global) variables in Block , it will guarantee you this. A related use of Block is when we want to be sure that some symbols will be cleared at the end. This question and answers there represent good examples of using Block for this purpose. Variable name conflicts In nested scoping constructs, it may happen that they define variables with the same names. Such conflicts are typically resolved in favor of the inner scoping construct. The documentation contains more details. Block vs Module/With So, Block implements dynamic scoping, meaning that it binds variables in time rather than in space. One can say that a variable localized by Block will have its value during the time this Block executes (unless further redefined inside of it, of course). I tried to outline the differences between Block and With / Module (dynamic vs lexical scoping) in this answer. 
Some conclusions For most common purposes of variable localization, use Module For local constants, use With Do not ordinarily use Block for introducing local variables All of the scoping constructs under discussion have advanced uses. For Module this is mostly creating and encapsulating non-trivial state (persistent or not). For With , this is mostly injecting inside unevaluated expressions. For Block , there are several advanced uses, but all of them are, well, advanced. I'd be worried if I found myself using Block a lot, but there are cases when it is indispensable.
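A compact illustration of the lexical vs. dynamic scoping distinction summarized above (a small sketch mirroring the Block example earlier in this answer):

```mathematica
ClearAll[a, i]
a := i;
With[{i = 2}, a]     (* ==> i ; no literal i in the body, so nothing is replaced *)
Module[{i = 2}, a]   (* ==> i ; a refers to the global i, not the local one *)
Block[{i = 2}, a]    (* ==> 2 ; dynamic scoping affects the evaluation of a *)
```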
{ "source": [ "https://mathematica.stackexchange.com/questions/559", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/79/" ] }
573
I would like to list all possible times in a 12-hour period, where the hour hand overlaps the minute hand completely. This is really a question about three distinct things - to be done in Mathematica - though only the first of the following points really interests me: Finding solutions to an equation with several variables, where each variable has different restrictions on its domain; Formatting those solutions in a convenient format; Using those solutions to draw the actual clocks! So what we have is two different variables, $h$ and $m$. $h$ is an integer, between $0$ and $11$, and $m$ is a real number, between $0$ and $60$. We want solutions to when $$ 30h - \frac{11m}{2}=0 $$ within those domain restrictions. I have tried using both Solve and Reduce , but neither seems capable of giving me anything more than conditional expressions. So I would like to find all possible solutions, format them in a nice hour:minute format, and maybe even draw the actual clocks corresponding to these solutions. The formatting and drawing are actually not that hard (though elegant, efficient, and different solutions are certainly welcome); I am truly interested in mixing the domains for $h$ and $m$ above, and getting meaningful answers. Please feel free to tag as you feel appropriate.
I don't think it's necessary to use all the apparatus of Solve or Reduce here. When you think about it, at one o'clock, the hour hand is on the 1, which corresponds to five minutes. So the hands meet a little after five past one. The solution is therefore that $m = 60 (\frac{h}{11})$. Someone else might show how this can be solved explicitly. Here is a short piece of code that finds the correct times and formats them nicely as "HH:MM:SS". DateString[{2012, 1, 23, #, 60. (# )/11}, {"Hour12", ":", "Minute", ":", "Second"}] & /@ Range[0, 11] {"12:00:00", "01:05:27", "02:10:54", "03:16:21", "04:21:49", "05:27:16", "06:32:43", "07:38:10", "08:43:38", "09:49:05", "10:54:32", "12:00:00"} Edit to include equation solving approach To do this in a more complex situation that actually involves Solve , something along these lines would work: soln = m /. First@Solve[30 h - 11 m /2 == 0, m, Reals] (60 h)/11 All the solutions are actually in the correct domain of $m$ when $h \in\{1,...,11\}$, but here is what you would need to do to check this. times = Select[Table[{h, N@soln}, {h, 0, 11}], 0. <= #[[2]] <= 60. &] Convert to date strings: strings = DateString[Join[{2012, 1, 23}, #], {"Hour12", ":", "Minute"}] & /@ times {"12:00", "01:05", "02:10", "03:16", "04:21", "05:27", "06:32", "07:38", "08:43", "09:49", "10:54", "12:00"} Convert to the necessary angle units. degrees = {1, -6 Degree}*# & /@ times {{0, 0.}, {1, -0.571199}, {2, -1.1424}, {3, -1.7136}, {4, -2.28479}, {5, -2.85599}, {6, -3.42719}, {7, -3.99839}, {8, -4.56959}, {9, -5.14079}, {10, -5.71199}, {11, -6.28319}} Graphics[{Circle[{0, 0}, 0.85], MapThread[ Text[#1, {Cos[#2 + Pi/2], Sin[#3 + Pi/2]}] &, {strings, degrees[[All, 2]], degrees[[All, 2]]}]}, ImageSize -> 250]
{ "source": [ "https://mathematica.stackexchange.com/questions/573", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/191/" ] }
574
For example, this input... {{1, 2}, {3, 4}}*{{1, 2}, {3, 4}} produces this output... {{1, 4}, {9, 16}} and this input... {{1, 2}, {3, 4}}^2 produces the same... {{1, 4}, {9, 16}} What I want in both cases is... {{7, 10}, {15, 22}} I think I know what's going on here. Mathematica doesn't seem to be discriminating between lists of lists, and matrices, and the * and ^ operators are just threading over the lists. I've found . and MatrixPower which do what I think * and ^ should. But the question still remains, is this confusing (to at least me) behavior of * and ^ by design, and what benefit does it confer?
Matrices in Mathematica are nothing but a specific type of list of lists — specifically, a two dimensional list of lists. * is the short form for the Times function, which threads over lists elementwise, and this is what you'd use if you wanted to take the Hadamard product of two matrices. So when you say A*B , you're actually saying Times[A, B] . . on the other hand, is short form for Dot , which lets you take the usual matrix products. So A.B is equivalent to Dot[A, B] . Both of these are different and it just boils down to understanding and remembering the short forms and the functions they represent. If you're coming from a language like MATLAB, you might be confused at first, because * and ^ indeed do behave the way you described in that language. Although one should familiarize themselves with each language's differences, this might help you in remembering it — * and ^ behave exactly like .* and .^ respectively in MATLAB, in that they operate element wise. Whether it is intuitive or not depends on your personal preferences (and experience with other languages). In the same vein, you could also ask why Infix is ~ , when MATLAB treats it as the not operator or throwaway variable, depending on how you use it :)
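Putting the pieces together in a short sketch — Times (*) threads elementwise, while Dot (.) and MatrixPower give the matrix-algebra results the question asked for:

```mathematica
A = {{1, 2}, {3, 4}};
A*A                (* elementwise: {{1, 4}, {9, 16}} *)
A.A                (* matrix product: {{7, 10}, {15, 22}} *)
MatrixPower[A, 2]  (* same as A.A here: {{7, 10}, {15, 22}} *)
```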
{ "source": [ "https://mathematica.stackexchange.com/questions/574", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/111/" ] }
601
I wonder if this is a bug, or if I'm misunderstanding something: Exists[n, EvenQ[n] && PrimeQ[n]] // Resolve (* ==> False *) So if I interpret this result correctly, according to Mathematica there does not exist an even prime. However, if given the number directly, it gives the correct result: EvenQ[2] && PrimeQ[2] (* ==> True *) So did I find a bug in Mathematica (and if so, is it fixed in the latest version)? Or did I misunderstand Resolve ?
Note: I am not particularly knowledgeable in the field of this question, so what I write below may well be wrong. I don't know whether or not this should be considered a bug, but to my mind this is an instance of a clash of programming and mathematical functionality. To put it differently, predicates (functions ending with Q ) seem to be a wrong match for things like FindInstance or Resolve , because of their evaluation semantics. Functions suitable for mathematical transformations tend to return unevaluated when they don't know what to do, which gives the outer functions a chance to further transform them as expressions. OTOH, predicates will always return False immediately when they cannot establish that the condition they check is True . By using Trace[Exists[n,EvenQ[n]&&PrimeQ[n]]//Resolve, TraceInternal->True] , one can see that at some point, both EvenQ and PrimeQ evaluate to False , and this is the reason for the result. Moreover, even a simpler request fails: Exists[n,EvenQ[n]]//Resolve (* --> False *) However, this will work: FindInstance[IntegerPart[n/2]*2==n && n>1 &&n<4 ,n,Integers] (* --> {{n->2}} *) I wasn't able to make the original request work (I tried using Divisors , but no luck). But my point is that recasting the condition as a set of equations and/or inequalities may increase the chances of success here, because their evaluation semantics is that of the mathematical rather than programming functionality. The borderline seems to be quite blurred, but I think it is there.
{ "source": [ "https://mathematica.stackexchange.com/questions/601", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/129/" ] }
608
How to programmatically build random directed acyclic graphs ( DAG )? I know about the AcyclicGraphQ predicate and the TopologicalSorting function, though Mathematica does not offer any algorithm to create such networks. Anyone has some experience in this topic? The following random method does not guarantee that the result will be acyclic: graph = RandomGraph[{10, 10}, DirectedEdges -> True] AcyclicGraphQ@graph (* ==> True *)
Note that @halmir's solution does the same thing as described below, but much more concisely. I recommend using that approach. The idea is that the graph is acyclic if and only if there exists a vertex ordering which makes the adjacency matrix lower triangular¹. It's easy to see that if the adjacency matrix is lower triangular, then vertex $i$ can only be pointing to vertex $j$ if $i<j$ . So let's generate a matrix which has zeros and ones uniformly distributed under the diagonal: vertexCount = 10; edgeCount = 30; elems = RandomSample@ PadRight[ConstantArray[1, edgeCount], vertexCount (vertexCount - 1)/2] adjacencyMatrix = Take[ FoldList[RotateLeft, elems, Range[0, vertexCount - 2]], All, vertexCount ] ~LowerTriangularize~ -1 ( Thanks to @Mr.Wizard for the code that fills the triangular matrix! ) graph = AdjacencyGraph[adjacencyMatrix] AcyclicGraphQ[graph] (* ==> True *) LayeredGraphPlot will show you the acyclic structure in a "convincing" way: You did not say it explicitly, but I assume you need a connected graph. Unfortunately I have no algorithm that gives you a connected one, but you can keep generating them until you find a connected one by accident (brute force). If the connectance is very low, and you get very few connected ones, you can try generating graphs with a slightly higher vertex count than the required one until the largest connected component has the required vertex count. Packed into a function for convenience: randomDAG[vertexCount_, edgeCount_] /; edgeCount < vertexCount (vertexCount - 1)/2 := Module[ {elems, adjacencyMatrix}, elems = RandomSample@ PadRight[ConstantArray[1, edgeCount], vertexCount (vertexCount - 1)/2]; adjacencyMatrix = Take[ FoldList[RotateLeft, elems, Range[0, vertexCount - 2]], All, vertexCount ] ~LowerTriangularize~ -1; AdjacencyGraph[adjacencyMatrix] ] ¹ You can find the ordering that makes the adjacency matrix triangular using a topological sort .
{ "source": [ "https://mathematica.stackexchange.com/questions/608", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
627
I would like to plot those two datasets on top of each other. But they have very different ranges on the $y$ axis. How can I have two different axes? I found the following in the Help menu, but it is quite esoteric for me and I can't adapt it to data (vs. functions): TwoAxisPlot[{f_, g_}, {x_, x1_, x2_}] := Module[{fgraph, ggraph, frange, grange, fticks, gticks}, {fgraph, ggraph} = MapIndexed[ Plot[#, {x, x1, x2}, Axes -> True, PlotStyle -> ColorData[1][#2[[1]]]] &, {f, g}]; {frange, grange} = (PlotRange /. AbsoluteOptions[#, PlotRange])[[ 2]] & /@ {fgraph, ggraph}; fticks = N@FindDivisions[frange, 5]; gticks = Quiet@ Transpose@{fticks, ToString[NumberForm[#, 2], StandardForm] & /@ Rescale[fticks, frange, grange]}; Show[fgraph, ggraph /. Graphics[graph_, s___] :> Graphics[ GeometricTransformation[graph, RescalingTransform[{{0, 1}, grange}, {{0, 1}, frange}]], s], Axes -> False, Frame -> True, FrameStyle -> {ColorData[1] /@ {1, 2}, {Automatic, Automatic}}, FrameTicks -> {{fticks, gticks}, {Automatic, Automatic}}]]
This can be done with Overlay if the ImagePadding and the horizontal range for each plot is the same. For example, plot1 = ListLinePlot[ Accumulate[RandomReal[{0, 1}, {100}]], PlotStyle -> Blue, ImagePadding -> 25, Frame -> {True, True, True, False}, FrameStyle -> {Automatic, Blue, Automatic, Automatic} ] plot2 = ListLinePlot[ Accumulate[RandomReal[{0, 100}, {100}]], PlotStyle -> Red, ImagePadding -> 25, Axes -> False, Frame -> {False, False, False, True}, FrameTicks -> {{None, All}, {None, None}}, FrameStyle -> {Automatic, Automatic, Automatic, Red} ] Overlay[{plot1, plot2}] Edit: Cleared up which axis is which using FrameStyle .
{ "source": [ "https://mathematica.stackexchange.com/questions/627", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/172/" ] }
637
The following line of code finds the edge of a character: pic = Binarize[GradientFilter[Rasterize[Style["\[Euro]", FontFamily -> "Times"], ImageSize -> 200] // Image, 1]] The coordinates of the edges can be found as follows: pdata = Position[ImageData[pic], 1]; Test: Graphics[Point[pdata]] However, the points are not sorted in an order usable by Line or Polygon : Graphics[Polygon[pdata]] This brings me to my question: What would be an efficient method to sort the coordinates so that it would plot properly with Line or Polygon ? Additionally, How to thin and smooth the set of points? How to deal with characters with holes in them, like the ones below? or
I think there is a neat solution. We have the curious function ListCurvePathPlot : pic = Thinning@Binarize[GradientFilter[Rasterize[Style["\[Euro]", FontFamily -> "Times"], ImageSize -> 200] // Image, 1]]; pdata = Position[ImageData[pic], 1]; lcp = ListCurvePathPlot[pdata] Now this is of course a Graphics containing a Line with a set of points lcp[[1, 1, 3, 2]] So of course we can do something like Graphics3D[Table[{Orange, Opacity[.5],Polygon[(#~Join~{10 n})& /@ lcp[[1, 1, 3, 2, 1]]]}, {n, 10}], Boxed -> False] I think it works nicely with "8" and Polygon : pic = Thinning@Binarize[GradientFilter[ Rasterize[Style["8", FontFamily -> "Times"], ImageSize -> 500] //Image, 1]]; pdata = Position[ImageData[pic], 1]; lcp = ListCurvePathPlot[pdata] And you can extract the polygons one by one: Graphics3D[{{Orange, Thick, Polygon[(#~Join~{0}) & /@ lcp[[1, 1, 3, 2, 1]]]}, {Red, Thick, Polygon[(#~Join~{1}) & /@ lcp[[1, 1, 3, 3, 1]]]}, {Blue, Thick, Polygon[(#~Join~{200}) & /@ lcp[[1, 1, 3, 4, 1]]]}}] => To smooth the curve set ImageSize -> "larger number" in your pic = code. => To thin the curve to 1 pixel wide use Thinning : Row@{Thinning[#], Identity[#]} &@Binarize[GradientFilter[ Rasterize[Style["\[Euro]", FontFamily -> "Times"], ImageSize -> 200] // Image, 1]] You can do curve extraction more efficiently with Mathematica. A simple example would be text = First[ First[ImportString[ ExportString[ Style["\[Euro] 9 M-8 ", Italic, FontSize -> 24, FontFamily -> "Times"], "PDF"], "PDF", "TextMode" -> "Outlines"]]]; Graphics[{EdgeForm[Black], FaceForm[], text}]
{ "source": [ "https://mathematica.stackexchange.com/questions/637", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/57/" ] }
669
This is always a problem when I distribute my packages to colleagues working on different platforms. I have my packages installed into a private directory and I autoload some of them when Mathematica starts, but since everyone has a different folder hierarchy and different customs on where to put files/what to autoload, this is always a bottleneck. Should non-power-users simply use the menu option: Install... ? Can a foolproof script be defined, that - no matter what platform is in use - installs these packages in a way that both the user and Mathematica (and the code I wrote and distributed) can find it? That is, packages are not added to the system-default package directory. Or would it be better to put packages into the default directory of Mathematica (e.g. C:\Applications\Mathematica 8.0.1\AddOns )? If yes, then which directory should hold them: LegacyPackages , ExtraPackages or Packages ? Further questions: What if the package that I send consists of multiple packages, in a delicate directory structure? Can the menu item Install... deal with that?
Note: This answer was originally written with package authors in mind, aiming to provide a user-friendly way to distribute packages and provide installation instructions. If your aim is user convenience, today you should be using paclets instead.

I think that the menu item File -> Install... is very convenient, even for power users. The only problem is that there is no uninstall option. However, if the package consists of a single file, upgrading is easy: the old file will be overwritten with the new file. You can write some simple instructions for users:

1. Open the .m file you sent them.
2. Choose File -> Install...
3. Choose Type -> Package, Source -> (the open notebook), Install Name -> SomePackage.
4. Load the package by evaluating <<SomePackage` .

The only thing that can go wrong is that they mistype the install name. The Install... menu item will put the package into FileNameJoin[{$UserBaseDirectory, "Applications"}], which on Windows is %appdata%\Mathematica\Applications (press Win-R and type the above to get to that directory). When necessary, the package file can be deleted from that directory. This is the usual way I install and upgrade palettes myself.

Putting packages into the Mathematica installation directory is not really advantageous because they will get lost when Mathematica is upgraded (for example, from 8.0.1 to 8.0.4). Instead they can be put into $BaseDirectory/Applications (for all users) or $UserBaseDirectory/Applications (for the current user). This is what the Install... menu item does.

It seems that the Install... menu item can deal with multi-file packages too. "Type of Item to Install" should be set to "Applications", "Source" -> "From File...", and the package files need to be inside an archive (.zip file). I have not used this personally, so I have no experience with it (e.g. about what happens on upgrade).

@AlbertRetey noted below that the Wolfram Player Pro does not have this menu item at all.
The only way to install packages into it is to do it manually or create a script that does it.
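A minimal install script along those lines might look like the following. This is only a sketch of the manual copy route described above; the package name SomePackage.m and its source location are placeholders, not part of the original answer:

```mathematica
(* Sketch: copy a single-file package into the per-user Applications
   directory, so that <<SomePackage` finds it on any platform.
   "SomePackage.m" is a placeholder; NotebookDirectory[] assumes the
   script runs from a saved notebook next to the package file. *)
target = FileNameJoin[{$UserBaseDirectory, "Applications", "SomePackage.m"}];
If[FileExistsQ[target], DeleteFile[target]]; (* allow upgrading *)
CopyFile[FileNameJoin[{NotebookDirectory[], "SomePackage.m"}], target]
```

Because $UserBaseDirectory is resolved by Mathematica itself, the same script works unchanged on Windows, Linux and OS X.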
{ "source": [ "https://mathematica.stackexchange.com/questions/669", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
680
I've created a notebook for use in an in-class presentation. There is a fair amount of MMA code, and my students know nothing about MMA nor do they need to. I don't want to distract them with the code. I understand how to collapse the input cells and show only the output, but the input cells expand whenever I evaluate the notebook so I'm back to where I started. This notebook is meant to be interactive, so it has to be evaluated whenever I change the input. In this case, I have an InputField that requests a ticker symbol and then the code uses FinancialData to get the data and from that creates the results. I've tried creating a slideshow, but that has the same behavior. I've looked at the resources and answer in " Best way to give presentations with Mathematica " without finding any mention of this issue. So, in MMA 8.04 is there any way to force the input cells to stay hidden after the notebook is evaluated? Surely, there must be an option for this somewhere.
AutoCollapse[] function

Please try this code, based on Sasha's adaptation of my own answer to this question.

AutoCollapse[] := (
  If[$FrontEnd =!= $Failed,
   SelectionMove[EvaluationNotebook[], All, GeneratedCell];
   FrontEndTokenExecute["SelectionCloseUnselectedCells"]])

Then in a new cell:

2 + 2
AutoCollapse[]

Always place AutoCollapse[] as the last line of an Input cell.

Stylesheets

To get the behavior without having to include AutoCollapse[] in each cell you can use stylesheets and CellEpilog. For example, to create an InputHidden style, use menu Format > Edit Stylesheet... and then add a Cell with the following code (use Ctrl+Shift+E to edit Cell code):

Cell[StyleData["InputHidden", StyleDefinitions -> StyleData["Input"]],
 CellEpilog :> (SelectionMove[EvaluationNotebook[], All, GeneratedCell];
   FrontEndTokenExecute["SelectionCloseUnselectedCells"]),
 MenuSortingValue -> 1510,
 MenuCommandKey -> "8"
]

This creates a new style that behaves like Input but which auto-collapses when evaluated. MenuCommandKey -> "8" lets it be quickly applied using Alt+8; change or remove this line as desired.

I may be reading more into your question than is there. As Heike points out, you can close the input cells manually by deselecting menu Cell > Cell Properties > Open, but I assumed you knew this already and provided the solution above. If all you need is a hidden cell that generates output, use the menu. If you need something a little more flexible that automatically hides after you make your changes, I hope you will find the methods above useful.
{ "source": [ "https://mathematica.stackexchange.com/questions/680", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/130/" ] }
687
I'd like to display field lines for a point charge in 3 dimensions. Not a force field (short arrows) but continuous field lines that start on the charge.
This is something I have used for my classes. Over time, I've tried to make it more and more user friendly, but that's also made it a little longish. I'll post the complete set of functions, with apologies if it's a bit unwieldy... As you'll see, I found it does indeed work better in my use cases if I normalize the field, so that we advance along the field lines in more balanced steps. The hardest part in applying these functions is to choose the appropriate seed points. fieldSolve::usage = "fieldSolve[f,x,x0,\!\(\*SubscriptBox[\(t\), \(max\)]\)] \ symbolically takes a vector field f with respect to the vector \ variable x, and then finds a vector curve r[t] starting at the point \ x0 satisfying the equation dr/dt=\[Alpha] f[r[t]] for \ t=0...\!\(\*SubscriptBox[\(t\), \(max\)]\). Here \[Alpha]=1/|f[r[t]]| \ for normalization. To get verbose output add debug=True to the \ parameter list."; fieldSolve[field_, varlist_, xi0_, tmax_, debug_: False] := Module[ {xiVec, equationSet, t}, If[Length[varlist] != Length[xi0], Print["Number of variables must equal number of initial conditions\ \nUSAGE:\n" <> fieldSolve::usage]; Abort[]]; xiVec = Through[varlist[t]]; (* Below, Simplify[equationSet] would cost extra time and doesn't help with the numerical solution, so don't try to simplify. *) equationSet = Join[ Thread[ Map[D[#, t] &, xiVec] == Normalize[field /. Thread[varlist -> xiVec]] ], Thread[ (xiVec /. t -> 0) == xi0 ] ]; If[debug, Print[Row[{"Numerically solving the system of equations\n\n", TraditionalForm[(Simplify[equationSet] /. t -> "t") // TableForm]}]]]; (* This is where the differential equation is solved. The Quiet[] command suppresses warning messages because numerical precision isn't crucial for our plotting purposes: *) Map[Head, First[xiVec /. 
Quiet[NDSolve[ equationSet, xiVec, {t, 0, tmax} ]]], 2] ] fieldLinePlot[field_, varList_, seedList_, opts : OptionsPattern[]] := Module[{sols, localVars, var, localField, plotOptions, tubeFunction, tubePlotStyle, postProcess = {}}, plotOptions = FilterRules[{opts}, Options[ParametricPlot3D]]; tubeFunction = OptionValue["TubeFunction"]; If[tubeFunction =!= None, tubePlotStyle = Cases[OptionValue[PlotStyle], Except[_Tube]]; plotOptions = FilterRules[plotOptions, Except[{PlotStyle, ColorFunction, ColorFunctionScaling}]]; postProcess = Line[x_] :> Join[tubePlotStyle, {CapForm["Butt"], Tube[x, tubeFunction @@@ x]}] ]; If[Length[seedList[[1, 1]]] != Length[varList], Print["Number of variables must equal number of initial \ conditions\nUSAGE:\n" <> fieldLinePlot::usage]; Abort[]]; localVars = Array[var, Length[varList]]; localField = ReleaseHold[ Hold[field] /. Thread[Map[HoldPattern, Unevaluated[varList]] -> localVars]]; (*Assume that each element of seedList specifies a point AND the \ length of the field line:*)Show[ ParallelTable[ ParametricPlot3D[ Evaluate[ Through[#[t]]], {t, #[[1, 1, 1, 1]], #[[1, 1, 1, 2]]}, Evaluate@Apply[Sequence, plotOptions] ] &[fieldSolve[ localField, localVars, seedList[[i, 1]], seedList[[i, 2]] ] ] /. postProcess, {i, Length[seedList]} ] ] ]; Options[fieldLinePlot] = Append[Options[ParametricPlot3D], "TubeFunction" -> None]; SyntaxInformation[fieldLinePlot] = {"LocalVariables" -> {"Solve", {2, 2}}, "ArgumentsPattern" -> {_, _, _, OptionsPattern[]}}; SetAttributes[fieldSolve, HoldAll]; The main function is fieldLinePlot , but I split it into two functions to be more modular. Also, the problem of where to start drawing the field lines is treated separately because it depends a lot on the particular application. fieldSolve[f,x,x0,Subscript[t, max]] symbolically takes a vector field f with respect to the vector variable x , and then finds a vector curve r[t] starting at the point x0 satisfying the equation dr/dt = α f[r[t]] for t=0...tmax . 
Here α = 1/|f[r[t]]| for normalization. To get verbose output add debug=True to the parameter list. fieldLinePlot[field,varlist,seedList] plots 3D field lines of a vector field (first argument) that depends on the symbolic variables in varlist . The starting points for these variables are provided in seedList. Each element of seedList={{p1, T1},{p2, T2}...} is a tuple where pi is the starting point of the $i^\mathrm{th}$ field line and Ti is the length of that field line in both directions from Pi . Here are some examples: 1) Coulomb field of two opposite charges at $\vec{r} = \vec{0}$ and $\vec{r} = (1, 1, 1)$: Look at the form of seedList to see how the field line starting points and lengths are specified. seedList = With[{vertices = .1 N[PolyhedronData["Icosahedron"][[1, 1]]]}, Join[Map[{#, 2} &, vertices], Map[{# + {1, 1, 1}, -2} &, vertices]]]; Show[fieldLinePlot[{x, y, z}/ Norm[{x, y, z}]^3 - ({x, y, z} - {1, 1, 1})/ Norm[{x, y, z} - {1, 1, 1}]^3, {x, y, z}, seedList, PlotStyle -> {Orange, Specularity[White, 16], Tube[.01]}, PlotRange -> All, Boxed -> False, Axes -> None], Background -> Black] 2) Magnetic field of an infinite straight wire: With[{seedList = Table[{{x, 0, 0}, 6.5}, {x, .1, 1, .1}] }, Show[fieldLinePlot[{-y, x, 0}/(x^2 + y^2), {x, y, z}, seedList, PlotStyle -> {Orange, Specularity[White, 16], Tube[.01]}, PlotRange -> All, Boxed -> False, Axes -> None], Graphics3D@Tube[{{0, 0, -.5}, {0, 0, .5}}], Background -> Black]] Edit: added variable line thickness to represent field strength The field lines can be given a color that scales with the field strength (the norm of the vector field along the lines), by specifying a ColorFunction in fieldLinePlot . For example, if the vector field has been defined as a function f2 of variables x,y,z , then you could add the option ColorFunctionScaling -> False, ColorFunction -> Function[{x,y,z,u}, Quiet@Hue[Clip[ Norm[f2[x,y,z]],{0,20}]/20]] as I mention in the comment section. 
In this new edit, I added the ability to encode the field strength in the thickness of the field lines instead. This required adding a new option "TubeFunction" which works similarly to ColorFunction . It is a function of the three coordinates x,y,z and returns the radius of the tube representing the field line at that point. To calculate this radius in the examples below, I take the (unscaled) value of the field and get its Norm . Then I scale and constrain it to a reasonable range so that the thickness variations of the field lines don't look too grotesque: 3) Same Coulomb field as above, but with varying field line thickness f2[x_, y_,z_] := {x, y, z}/Norm[{x, y, z}]^3 - ({x, y, z} - {1, 1, 1})/ Norm[{x, y, z} - {1, 1, 1}]^3 seedList = With[{vertices = .1 N[PolyhedronData["Icosahedron"][[1, 1]]]}, Join[Map[{#, 2} &, vertices], Map[{# + {1, 1, 1}, -2} &, vertices]]]; fieldLinePlot[f2[x, y, z], {x, y, z}, seedList, PlotStyle -> {Orange, Specularity[White, 16]}, PlotRange -> All, Boxed -> False, Axes -> None, "TubeFunction" -> Function[{x, y, z}, Quiet[Clip[Norm[f2[x, y, z]], {2, 40}]/200]], Background -> Black] 4) Same magnetic field as above, this time with varying line thickness f3[x_, y_, z_] := {-y, x, 0}/(x^2 + y^2) With[{seedList = Table[{{x, 0, 0}, 6.5}, {x, .1, 1, .1}]}, Show[fieldLinePlot[f3[x, y, z], {x, y, z}, seedList, PlotStyle -> {Cyan, Specularity[White, 16]}, PlotRange -> All, Boxed -> False, Axes -> None, "TubeFunction" -> Function[{x, y, z}, Quiet[Clip[Norm[f3[x, y, z]], {1, 40}]/200]]], Graphics3D@Tube[{{0, 0, -.5}, {0, 0, .5}}], Background -> Black]]
{ "source": [ "https://mathematica.stackexchange.com/questions/687", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/202/" ] }
702
New in Mathematica 8 is FilledCurve (and its cousin JoinedCurve ). The docs state that this function can take a list of segments or components, each in the form of a Line , a BezierCurve , or a BSplineCurve . But the examples show alteration of a glyph, with a resulting JoinedCurve that has a different pattern. Here is a simplified example: ImportString[ExportString[ Style["\[FilledCircle]", Bold, FontFamily -> "Helvetica", FontSize -> 12], "PDF"],"TextMode" -> "Outlines"][[1, 1, 2, 1, 1]] gives FilledCurve[{{{1, 4, 3}, {1, 3, 3}, {1, 3, 3}, {1, 3, 3}}}, {{{10.9614, 7.40213}, {10.9614, 10.2686}, {8.51663, 12.3137}, {5.79394, 12.3137}, {3.05663, 12.3137}, {0.641063, 10.2686}, {0.641063, 7.40213}, {0.641063, 4.53319}, {3.05663, 2.48813}, {5.79394, 2.48813}, {8.53125, 2.48813}, {10.9614, 4.51856}, {10.9614, 7.40213}}}] The second argument is a list of points representing the glyph. The first list is some representation of smoothing the boundary given by the second list. For example, displaying the points alone as connected line segments Graphics@Line[ImportString[ExportString[ Style["\[FilledCircle]", Bold, FontFamily -> "Helvetica", FontSize -> 12], "PDF"], "TextMode" -> "Outlines"][[1, 1, 2, 1, 1, 2, 1]]] gives Compare that with the JoinedCurve version: Graphics[ImportString[ ExportString[Style["\[FilledCircle]", Bold, FontFamily -> "Helvetica", FontSize -> 12], "PDF"], "TextMode" -> "Outlines"][[1, 1, 2, 1, 1]] /. FilledCurve[a__] :> JoinedCurve[a]] What exactly do the triples in the first list represent? How can I reconstruct the smooth circle using other Graphics primitives, such as BSplineCurve ? My motivation is to be able to extract the curves from 2D glyphs and plot them in Graphics3D[] (e.g., with the replacement {x_Real,y_Real}:>{Cos[theta]x,Sin[theta]x,y} ), since FilledCurve[] only works with Graphics[] .
The first element in the triples seems to indicate the type of curve used for the segment where 0 indicates a Line , 1 a BezierCurve , and 3 a BSplineCurve . I haven't figured out yet what 2 does. Edit : When the first element of the triple is 2 , the segment will be a BezierCurve similar to option 1 except that with option 2 , an extra control point is added to the list to make sure that the current segment is tangential to the previous segment. The second digit indicates how many points to use for the segment, and the last digit the SplineDegree . To convert the FilledCurve to a list of Graphics primitives, you could therefore do something like conversion[curve_] := Module[{ff}, ff[i_, pts_, deg_] := Switch[i, 0, Line[Rest[pts]], 1, BezierCurve[Rest[pts], SplineDegree -> deg], 2, BezierCurve[ Join[{pts[[2]], 2 pts[[2]] - pts[[1]]}, Drop[pts, 2]], SplineDegree -> deg], 3, BSplineCurve[Rest[pts], SplineDegree -> deg] ]; Function[{segments, pts}, MapThread[ff, { segments[[All, 1]], pts[[Range @@ (#1 - {1, 0})]] & /@ Partition[Accumulate[segments[[All, 2]]], 2, 1, {-1, -1}, 1], segments[[All, 3]] } ] ] @@@ Transpose[List @@ curve] ] Then for the example in the original post, curve = FilledCurve[{{{1, 4, 3}, {1, 3, 3}, {1, 3, 3}, {1, 3, 3}}}, {{{10.9614, 7.40213}, {10.9614, 10.2686}, {8.51663, 12.3137}, {5.79394, 12.3137}, {3.05663, 12.3137}, {0.641063, 10.2686}, {0.641063, 7.40213}, {0.641063, 4.53319}, {3.05663, 2.48813}, {5.79394, 2.48813}, {8.53125, 2.48813}, {10.9614, 4.51856}, {10.9614, 7.40213}}}]; curve2 = conversion[curve] gives {BezierCurve[{{10.9614, 7.40213}, {10.9614, 10.2686}, {8.51663, 12.3137}, {5.79394, 12.3137}}, SplineDegree -> 3], BezierCurve[{{5.79394, 12.3137}, {3.05663, 12.3137}, {0.641063, 10.2686}, {0.641063, 7.40213}}, SplineDegree -> 3], BezierCurve[{{0.641063, 7.40213}, {0.641063, 4.53319}, {3.05663, 2.48813}, {5.79394, 2.48813}}, SplineDegree -> 3], BezierCurve[{{5.79394, 2.48813}, {8.53125, 2.48813}, {10.9614, 4.51856}, {10.9614, 
7.40213}}, SplineDegree -> 3]} and Graphics[curve2] produces
{ "source": [ "https://mathematica.stackexchange.com/questions/702", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/63/" ] }
704
Every time I define a new function, I wonder which construct I should use: true functions obtained by using Function , or rule-based syntax. For example, these are two ways of defining a square function: square = Function[x, x^2] (* Alternatively: square = #^2& *) square2[x_] := x^2 Is there some fundamental difference (e.g. execution efficiency) between the two approaches?
square = Function[x, x^2];
square1 = #^2 &;
square2[x_] := x^2;

DownValues[square]
DownValues[square1]
DownValues[square2]

{}
{}
{HoldPattern[square2[x_]] :> x^2}

Two differences that immediately come to mind are that:

1) Functions with down values won't autocompile when you use them in Table, Map, Nest etc., and are therefore less efficient when used that way. E.g.

data = RandomReal[{0, 10}, {10000}];
Timing[Developer`PackedArrayQ[Map[square, data]]]
Timing[Developer`PackedArrayQ[tmp1 = Map[square1, data]]]
Timing[Developer`PackedArrayQ[tmp2 = Map[square2, data]]]

{0.001404, True}
{0.001498, True}
{0.022324, False}

Despite tmp1 being packed and tmp2 unpacked, they are equal:

tmp1 == tmp2
True

but using the pure function gives you a packed list, which means faster evaluations and less memory for storage:

N@ByteCount[tmp2]/ByteCount[tmp1]
3.49456

This example used Map, but you would observe the same thing with Table, Nest, Fold and so on. As to why this is the case (@David's question), the only answer I have is the circular one that autocompilation of functions with down values hasn't been implemented. I haven't found out what the difficulties are in implementing this, i.e. whether it hasn't been done because it can't be, or because it just hasn't been. Someone else may know and can comment.

2) Functions with down values may (in all likelihood will) cause a security warning when present in an embedded CDF. I'm sure others will be able to expand on this and add many more differences.
{ "source": [ "https://mathematica.stackexchange.com/questions/704", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/103/" ] }
736
I quite often would like to draw graphics in my $\LaTeX$ documents using Mathematica. I have encountered three problems and would like to know if there are any workarounds:

1. I would like to make my graphics homogeneous with my document. That means I would like to use the same font in the graphics (axis labels etc.) as in the main text. Mathematica does not support Computer Modern. I found a workaround using psfrag, saving graphics as EPS: with psfrag it is possible to rename the text in the graphic into $\LaTeX$ code. A big downside is that this method does not allow me to use pdflatex, so many other packages therefore do not work.

2. Graphics3D objects are extremely big. If I save one as a bitmap, the picture usually becomes horrible.

3. I often would like to use transparency. If I use Opacity to make some part of the graphic transparent, the exported file is horrible.
There are a few different parts to your question. I'll just answer the part about using psfrag and pdflatex . There's a package called pstool that automates the whole process of using psfrag with pdflatex . For example, here's a graphics created in Mathematica 8 plot = Plot[Sin[Exp[x]], {x, -Pi, Pi}, AxesLabel -> {"e", "s"}] Export[NotebookDirectory[] <> "plot.eps", plot] Note the use of the single character names for the axes. This was discussed in the stackexchange question Mathematica 8.0 and psfrag . You can use psfrag on this image and compile straight to pdf using the following latex file \documentclass{article} \usepackage{pstool} \begin{document} \psfragfig{plot}{% \psfrag{e}{$\epsilon$} \psfrag{s}{$\Sigma$}} \end{document} Compile it using pdflatex --shell-escape filename.tex . You can optionally include a file plot.tex in the same directory which can contain all the psfrag code for plot.eps so that your main .tex file is tidier and the plot is more portable. Here's a screenshot of the graphics in the pdf file:
{ "source": [ "https://mathematica.stackexchange.com/questions/736", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/227/" ] }
745
I have this annoyance for a long time now: every Text cell uses the default font size, though it appears too small on my screen. See comparison of same text, same font (Times New Roman), same fontsize (12), same magnification (100%) on Word 2003 (left) and Mathematica 8.0.1 (right). The Screen Environment menu option is set to Working , though none of the other environments gives the proper visual size: some are even smaller (very hard to read), and e.g. Presentation gives enormous letters. I thought that a given fontsize should produce the same look-and-feel, though this is definitely not the case here. Of course I can set the fontsize larger in Mathematica, but then the problem manifests when I print the notebook (text will be too large). Has anyone else experienced this?
I might as well post my comment to Szabolcs as an answer. As Szabolcs noted, the default screen resolution in Mathematica is set to 72 dpi which might not agree with the actual resolution. You can change the screen resolution in the Option Inspector which can be found in the Format menu. Set "Show option values" to "Global preferences" to change Front End settings permanently or set it to "Selected Notebook" to apply them to only the current notebook. Then just search for ScreenResolution in the search box. The relevant option is the one called "ScreenResolution" with quotation marks. You can also find it via Formatting Options > Font Options > FontProperties > "ScreenResolution" . It's set to 72 by default as Szabolcs figured out. By the way, I found that on OS X at least, to change a value in the option inspector I need to click on the value and hover over the selection with my mouse cursor for a few seconds until it goes into edit mode, but it might be different on Windows. You can try out using the system dpi temporarily by evaluating: SetOptions[$FrontEndSession, FontProperties -> {"ScreenResolution" -> Automatic}] (This will revert to the previous value after the Front End is closed.) You do this for the current notebook using SetOptions[EvaluationNotebook[], FontProperties -> {"ScreenResolution" -> Automatic}] Alternatively, If it's just the "Text" style that is too small, you could change the default text font in the style sheet you're using. In order to do this, go to Format > Edit Stylesheet... and type Text in the text field. Select the newly created cell, change the size in Format > Size to whatever you want, and close the stylesheet editor. All text cells in your notebook should now use the updated font size by default.
{ "source": [ "https://mathematica.stackexchange.com/questions/745", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
761
Example: I have a matrix $R = \left( \begin{array}{cc} A & \mathbf{t} \\ 0 & 1 \end{array} \right) $ where $A$ is 3-by-3 and $\mathbf{t}$ is 3-by-1. Or in Mathematica A={{1,0,0},{0,0,1},{0,-1,0}}; t={1,1,1} I would like to be able to use a form of block matrix notation / entry and subsequently find the inverse of R. Question: Is this possible?
You're looking for ArrayFlatten . For your example matrices, R = ArrayFlatten[ {{A, {t}\[Transpose]},{0, 1}} ] (* => {{1, 0, 0, 1}, {0, 0, 1, 1}, {0, -1, 0, 1}, {0, 0, 0, 1}} *) The construct {t}\[Transpose] is necessary for ArrayFlatten to treat t as a column matrix. Then to find $\boldsymbol{R}^{-1}$, you run Inverse[R] (* => {{1, 0, 0, -1}, {0, 0, -1, 1}, {0, 1, 0, -1}, {0, 0, 0, 1}} *)
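Conversely (a small addition of my own, assuming R was assembled as above), the blocks can be read back off with ordinary Span syntax, which is also handy for checking the block structure of the inverse:

```mathematica
A2 = R[[1 ;; 3, 1 ;; 3]]; (* recovers A *)
t2 = R[[1 ;; 3, 4]];      (* recovers t *)
{A2 === A, t2 === t}
(* => {True, True} *)

(* the inverse has the expected block form {{A^-1, -A^-1.t}, {0, 1}} *)
Inverse[R][[1 ;; 3, 4]] === -Inverse[A].t
(* => True *)
```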
{ "source": [ "https://mathematica.stackexchange.com/questions/761", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/156/" ] }
771
The best way I can come up with to say "show me this fraction as a decimal number to M places, don't use scientific notation" is: NumberForm[N[1/998001,2994],ExponentFunction->(Null&)] It seems like that's an awful lot of typing for a very simple request. Is it possible to say it more concisely?
You can express any fraction/number to arbitrary decimal places by using a backtick followed by number of digits required. For example: In[1]:= 4/3`20 Out[1]= 1.3333333333333333333 This is the same as N[4/3, 20] . Now combine this with AccountingForm , which never uses scientific notation to get the output that you desire. AccountingForm[1/998001`2994] Out[2]//AccountingForm= 0.0000010020030040050060070080090100110120130140... However, be aware that AccountingForm uses parentheses for negative numbers: AccountingForm[-1/998001`2994] Out[3]//AccountingForm= (0.00000100200300400500600700800901001101201301401501601.... Daniel Lichtblau has a good point that although using ` instead of N might be shorter in this case, in general, it might not give the same result — for example, compare the digits of Log[2`50] and N[Log[2],50] . You'll see that they differ in the last couple of digits. However, for small use cases, the difference might be insignificant.
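To make Daniel Lichtblau's caveat concrete (this illustration is my own addition): with 2`50 the input carries a precision of 50 and significance arithmetic then tracks how that precision degrades through the computation, whereas N[Log[2], 50] aims at 50 correct digits of the result:

```mathematica
Precision[Log[2`50]]      (* a bit below 50: input precision was propagated *)
Precision[N[Log[2], 50]]  (* => 50 *)
```

So for a quick decimal expansion of a fraction the backtick form is fine, but once transcendental functions are involved the two approaches are not interchangeable.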
{ "source": [ "https://mathematica.stackexchange.com/questions/771", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/178/" ] }
783
I am trying to convert a list of string names into symbols, which will then be used to store data. I have 24 files (where the name of each file is a member of the list mentioned above) that I need to process, which is why I am trying to accomplish my goal programmatically. The code below results in a Tag warning. I have read the normal documentation sources and appreciate that one cannot assign something to a protected symbol, but this doesn't help me solve the problem. Would you have any advice on how to accomplish my goal?

Table[(ToExpression[mmsignalnames[[i]]] = Extract[ToExpression[celfilenames[[i]]], mmammindices[[j]]]), {i, 1, Length[mmsignalnames]}, {j, 1, Length[mmammindices]}]

Set::write: Tag ToExpression in ToExpression[mmsignalGSM356796] is Protected. >>
One solution is to use the third argument of ToExpression : With minimal modification, a working version of your code would look like this: Table[ ToExpression[ mmsignalnames[[i]], InputForm, Function[name, name = Extract[ToExpression[celfilenames[[i]]], mmammindices[[j]]], HoldAll]], {i, Length[mmsignalnames]}, {j, Length[mmammindices]}] (Untested because I don't have your data; but see below for the main idea and a small demonstration.) The core of the method is this: ToExpression["a", InputForm, Function[name, name = 1, HoldAll]] ToExpression will wrap the result into its third argument before evaluating it. We can make the third argument a function that sets a value to the symbol (in this simple example it always sets the value to 1 ). HoldAll is needed to make sure the symbol won't evaluate when it is passed to the function. You might find all the evaluation control I'm using here a bit confusing. To learn how to work with unevaluated expressions, I recommend reading Working with Unevaluated Expressions by Robby Villegas It is one of the best tutorials on the matter. Finally, after answering your actual question, I'd like to suggest you use a hash table instead of symbols: Instead of creating symbols from the strings "a" , "b" , "c" , ..., and assigning to them, you could assign to myTable["a"] , myTable["b"] , ... This will make programmatic access to this data trivial. You won't need to bother with evaluation control nearly as much. And more importantly, you can avoid accidental name collisions with existing symbols. Here's an example: (myTable[#] = 1) & /@ {"a", "b", "c"}
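To flesh out the hash-table suggestion (the key strings below are illustrative): once values are stored under string keys, lookup uses the same bracket syntax, and the stored keys can be recovered from DownValues:

```mathematica
(* store data under string keys instead of creating new symbols *)
myTable["GSM356796"] = {1.2, 3.4};
myTable["GSM356797"] = {5.6, 7.8};

myTable["GSM356796"]
(* => {1.2, 3.4} *)

(* list all keys stored so far *)
DownValues[myTable][[All, 1, 1, 1]]
(* e.g. {"GSM356796", "GSM356797"} *)
```

In version 10 and later, Association provides this kind of string-keyed storage directly.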
{ "source": [ "https://mathematica.stackexchange.com/questions/783", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/249/" ] }
805
There have already been some questions about some undocumented functionality in Mathematica . Such as ( please add to these lists! ) How can one find undocumented options or option values in Mathematica? What do these undocumented Style options in Mathematica do? Undocumented command-line options Also, other questions and answers that contained undocumented functions Internal`InheritedBlock (also in Exposing Symbols to $ContextPath ) Internal`Bag (in Implementing a Quadtree in Mathematica ) (also here ) RuleCondition (in Replace inside Held expression ) Along with the "Undocumented (or scarcely documented) Features" segment of the What is in your Mathematica tool bag? question. Szabolcs also maintains a list of Mathematica tricks which contains a list of "undocumented stuff", which can now be found archived here . So, what undocumented functions do you know and how do you use them? (Added useful information is maybe how you discovered the functions and any version dependence.)
Thinking about a recent answer made me wonder exactly which functions in Mathematica use Assumptions . You can find the list of System` functions that use that Option by running Reap[Do[Quiet[If[Options[Symbol[i], Assumptions]=!={}, Sow[i], Options::optnf]], {i, DeleteCases[Names["System`*"], _?(StringMatchQ[#, "$"~~__] &)]}]][[2, 1]] which (can be more elegantly written using list comprehension and) returns (in version 8) {"ContinuedFractionK", "Convolve", "DifferenceDelta", "DifferenceRootReduce", "DifferentialRootReduce", "DirichletTransform", "DiscreteConvolve", "DiscreteRatio", "DiscreteShift", "Expectation", "ExpectedValue", "ExponentialGeneratingFunction", "FinancialBond", "FourierCoefficient", "FourierCosCoefficient", "FourierCosSeries", "FourierCosTransform", "FourierSequenceTransform", "FourierSeries", "FourierSinCoefficient", "FourierSinSeries", "FourierSinTransform", "FourierTransform", "FourierTrigSeries", "FullSimplify", "FunctionExpand", "GeneratingFunction", "Integrate", "InverseFourierCosTransform", "InverseFourierSequenceTransform", "InverseFourierSinTransform", "InverseFourierTransform", "InverseZTransform", "LaplaceTransform", "Limit", "PiecewiseExpand", "PossibleZeroQ", "PowerExpand", "Probability", "ProbabilityDistribution", "Product", "Refine", "Residue", "Series", "SeriesCoefficient", "Simplify", "Sum", "SumConvergence", "TimeValue", "ToRadicals", "TransformedDistribution", "ZTransform"} You can similarly look for functions that take assumptions that are not in the System` context and the main ones you find are in Names["Developer`*Simplify*"] which are (adding "Developer`" to the context path) {"BesselSimplify", "FibonacciSimplify", "GammaSimplify", "HolonomicSimplify", "PolyGammaSimplify", "PolyLogSimplify", "PseudoFunctionsSimplify", "ZetaSimplify"} These are all specialized simplification routines that are not called by Simplify but are called by FullSimplify . 
However, sometimes FullSimplify can take too long on large expressions and I can imagine calling these specialized routines would be useful. Here's a simple usage example In[49]:= FunctionsWolfram["10.08.17.0012.01"] /. Equal -> Subtract // Simplify % // Developer`PolyLogSimplify Out[49]= -Pi^2/6 + Log[1 - z] Log[z] + PolyLog[2, 1 - z] + PolyLog[2, z] Out[50]= 0 (The FunctionsWolfram code is described here ) Another interesting assumption related context I noticed was Assumptions` . Once again, appending "Assumptions`" to the $ContextPath , Names["Assumptions`*"] returns the functions {"AAlgebraicQ", "AAssumedIneqQ", "AAssumedQ", "ABooleanQ", "AComplexQ", "AEvaluate", "AEvenQ", "AImpossibleIneqQ", "AInfSup", "AIntegerQ", "AllAssumptions", "AMathIneqs", "AMod", "ANegative", "ANonNegative", "ANonPositive", "AOddQ", "APositive", "APrimeQ", "ARationalQ", "ARealIfDefinedQ", "ARealQ", "ASign", "AssumedFalse", "AUnequalQ", "AWeakSign", "ImpliesQ"} These contain assumption aware versions of some standard system functions, e.g. In[22]:= Assuming[Element[x, Integers], {IntegerQ[x], AIntegerQ[x]}] Assuming[x > 0, {Positive[x], APositive[x]}] Out[22]= {False, True} Out[23]= {Positive[x], True}
{ "source": [ "https://mathematica.stackexchange.com/questions/805", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/34/" ] }
809
Mathematica has a lot of undocumented or poorly documented options. How does one go about working out if there is an undocumented option that might solve a particular problem? How does one work out what the universe of possible values the option might take are? (This also applies to options whose existence is documented, but where the range of valid values isn't.) As background, here is a collection of ones I and others have found so far: Graphics The Method option is an option for Graphics and related commands like BarChart . It is mentioned in the notes in the documentation and turns up in Options[Graphics] but is not listed as an option in the documentation with any examples. There are many sub-options, none of which are explicitly documented. Method -> {"ShrinkWrap" -> True} removes whitespace that Mathematica adds as a tiny rim to each plot. (pointed out by Sjoerd) Method -> {"GridLinesInFront" -> True} does what it says (see Brett Champion's answer to this question and this MathGroup thread ). Dynamic GridLines using this option are present in much of the graphical Wolfram|Alpha output (See, e.g. the edited number line code ). Method -> {"AxesInFront" -> True} also does what it says. (see TomD's comment on Brett's answer ) Method -> {Refinement -> {ControlValue -> angle}} sets the angle that decides when two points in a plot are not further subdivided - default is 5\[Degree] . (see Yaro's answer here and the relevant page in Stan Wagon's book . Also, a Plot version comparison by Alexey ) The option "MessagesHead" is used to track the origin of calls to Plot , etc., made by dependent plot functions such as LogPlot , LogLinearPlot , and DateListLogPlot . This allows the correct options and messages to be passed to and from the general function. An example of its use can be seen in this question . ImageSizeRaw option for various plotting and graphics functions is not documented, but turns out to be important for embedding CDFs into web pages . 
PrivateFontOptions -> {"OperatorSubstitution" -> False} , as documented here , stops minus signs, parentheses and the like from being in Mathematica 's special fonts rather than the selected text font. s0rce discovered that the ScalingFunctions option works for line plots ( ListPlot , Plot , etc). Possible values include "Reverse" , "Log" "Log10" – the last of these being itself undocumented. Not strictly a graphics function, but often used to create nice-looking ticks, FindDivisions has an undocumented Method option: for example, FindDivisions[{-1.8,8.9}, 6, Method -> "ExtendRange"] gives the encompassing divisions {-2, 0, 2, 4, 6, 8, 10} , as does Method -> Automatic ; any other setting for Method gives the inner divisions: {0, 2, 4, 6, 8} . You can control the amount a PieChart segment pops out of the chart when you click it using SetOptions[Charting'iSectorChart, {PopoutSpacing -> n}] , where n is numeric. The default is 0.2; for fun, try a negative number. You can suppress this behaviour altogether using SetOptions[Charting'SectorChart, {Popout -> False}] (in both these examples, change the quote mark to a backquote). For some Plot functions the setting for the PlotStyle option can be specified as a function as well as a list of graphics directives. The earliest reference for this undocumented feature is this answer by Simon Woods . Additional examples of this PlotStyle usage for Plot and ParametricPlot are: this , this , this , and this . 
Panels As noted in an earlier question , these options pop up in some graphics/panels, but are unrecognised when one uses them explicitly in Panel , Graphics or related structures: LineColor FrontFaceColor BackFaceColor GraphicsColor Legends There seem to be a lot of undocumented options here: AssembleLegendContainer BubbleScaleLegend ColorGradientLegend ContourLegend CurveLegend GridLegend Legend LegendContainer : SetOptions[Legending`GridLegend, Legending`LegendContainer -> Identity] removes the border from legends (thanks to Mr.Wizard) LegendHeading LegendImage LegendItemLayout LegendLayout LegendPane LegendPosition LegendReap Legends LegendSize LegendSow Equation-solving and minimisation/optimisation Evaluated -> False option of FindRoot (TomD in comments) System options for evaluation Per acl's answer below, SystemOptions[] reveals many hidden options using the following syntax. These can be set using SetSystemOptions[] . "PackedArrayOptions" /. SystemOptions[] "CompileOptions" /. SystemOptions[] Although this book by Nancy Blachman has been written for Version 2, it is still not a bad starting point: http://www.amazon.com/Mathematica-Quick-Reference-Version-Spiral/dp/0201628805
One thing you can do is look for options which appear in a function's Options but do not have a ::usage message. Of course, some of the results actually are documented in the help, they just don't have a usage message. Here's a function to do it: undoc[x_Symbol]:=Select[Options[x],!StringQ@MessageName[Evaluate@First@#,"usage"]&]; undoc[_] = {}; (* e.g. *) undoc[Plot] Out[3]= {Evaluated->Automatic,ImageSizeRaw->Automatic} The following runs this function on all symbols in System context, and presents the results in a grid. Some functions (like Cell ) have huge lists of options with no usage message, these ones I skip over (just printing out the function name) to save space. Grid[Select[{#,undoc[Symbol[#]]}&/@Names["System`*"], Last@#=!={}&&(Length@Last@#<10||Print@First@#)&],Frame->All]
{ "source": [ "https://mathematica.stackexchange.com/questions/809", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8/" ] }
821
Essentially what it says in the title. Mathematica can export its code to C . How much overhead does that inflict on the code, as compared to writing it from scratch in C?
A lot depends on how you write your code in Mathematica. In my experience, the rule of thumb is that the generated code will be efficient if the code inside Compile more or less resembles the code I would write in plain C (and it is clear why). Idiomatic (high-level) Mathematica code tends to be immutable. At the same time, Compile can handle a number of higher-level functions, such as Transpose , Partition , Map , MapThread , etc. Most of these functions return expressions, and even though these expressions are probably passed to the calling function, they must be created. For example, a call to ReplacePart which replaces a single part in a large array will necessarily lead to copying of that array. Thus, immutability generally implies creating copies. So, if you write your code in this style and hand it to Compile , you have to keep in mind that lots of small (or large) memory allocations on the heap, and copying of lists (tensors) will be happening. Since this is not apparent for someone who is used to high-level Mathematica programming, the slowdown this may incur may be surprising. See this and this answers for examples of problems coming from many small memory allocations and copying, as well as a speed-up one can get from switching from copying to in-place modifications. As noted by @acl, one thing worth doing is to set the SystemOptions -> "CompileOptions" as SetSystemOptions[ "CompileOptions" -> "CompileReportExternal" -> True] in which case you will get warnings for calling external functions etc. A good tool to get a "high-level" but precise view on the generated code is the CompilePrint function in the CompiledFunctionTools` package. It allows you to print the pseudocode version of the byte-code instructions generated by Compile . 
Things to watch for in the printout of CompilePrint function: Calls to CopyTensor Calls to MainEvaluate (callbacks to Mathematica, meaning that something could not be compiled down to C) One not very widely known technique of writing even large Compile -d functions and combining them from pieces so that there is no performance penalty, is based on inlining. I consider this answer very illustrative in this respect - I actually posted it to showcase the technique. You can also see this answer and a discussion in the comments below, for another example of how this technique may be applied. In summary - if you want your code to be as fast as possible, think about "critical" places and write those in "low-level" style (loops, assignments, etc) - the more it will resemble C the more chances you have for a speed-up (for an example of a function written in such a style and being consequently very fast, see the seqposC function from this answer ). You will have to go against Mathematica ideology and use a lot of in-place modifications. Then your code can be just as fast as hand-written one. Usually, there are just a few places in the program where this matters (inner loops, etc) - in the rest of it you can use higher-level functions as well.
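To make that inspection step concrete, here is a minimal sketch (the function definitions are illustrative, not taken from the answer above) of using CompilePrint to spot MainEvaluate callbacks:

```mathematica
Needs["CompiledFunctionTools`"]

(* A loop that compiles fully down to virtual-machine instructions *)
cf = Compile[{{n, _Integer}},
   Module[{s = 0.},
    Do[s += 1./i, {i, n}];
    s]];

(* Prime is not compilable, so this forces a callback to the kernel *)
slow = Compile[{{n, _Integer}}, Prime[n]];

CompilePrint[cf]    (* no MainEvaluate instructions expected *)
CompilePrint[slow]  (* look for a MainEvaluate line in the printout *)
```

If the second printout shows a MainEvaluate instruction, the corresponding call will be dispatched back to the Mathematica kernel on every invocation, which is usually where the performance of "compiled" code is lost.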
{ "source": [ "https://mathematica.stackexchange.com/questions/821", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/143/" ] }
838
Let's say I want to answer the question "what are the first 400 palindromic prime numbers?" The first approach that comes to my mind from the set of languages that I know is to use some sort of lazy list materialization, a la IEnumerable (and yield ) in C# , generators in Python , or sequence blocks in F# . For example, in C#: PrimesEnumerator().Where(n => n.ToString() == n.ToString().Reverse()).Take(400); This would cause the PrimesGenerator to be pumped for primes long enough for the Where() clause to find enough numbers that meet the requirement for Take() to meet its quota. The best I've come up with in Mathematica is something like: i = 1; results = List[]; While[Length[results] != 400, If[IntegerDigits[Prime[i]] == Reverse[IntegerDigits[Prime[i]]], results = Append[results,Prime[i]]]; i = i + 1] It surprises me that I end up writing in such an imperative style in Mathematica. Am I missing something that would let me write this entirely functionally? Maybe even with lazy lists? Update: I took inspiration from WReach's work of art answer , and made a package that took his ideas and expanded them into a broad, general solution for lazy data in Mathematica. I describe its usage in an answer below .
A "lazy list", "functional style" solution to this problem might look something like this: sIntegers[] ~sMap~ Prime ~sFilter~ palindromicQ ~sTake~ 400 // sList No such notation is built into Mathematica. However, creating such notations is Mathematica's strong suit. Let's do it. First, we need to define the notion of a "stream". Streams are inherently lazy, so let's use HoldAll : SetAttributes[stream, {HoldAll}] A stream can be empty: sEmptyQ[stream[]] := True ... or it can be non-empty, having two elements: sEmptyQ[stream[_, _]] = False; The first element of the stream is called the "head": sHead[stream[h_, _]] := h The remaining elements of the stream are called the "tail": sTail[stream[_, t_]] := t Armed with these definitions, we can now express an infinite stream of integers thus: sIntegers[n_:1] := With[{nn = n+1}, stream[n, sIntegers[nn]]] sIntegers[] // sEmptyQ (* False *) sIntegers[] // sHead (* 1 *) sIntegers[] // sTail // sHead (* 2 *) sIntegers[] // sTail // sTail // sHead (* 3 *) Infinite streams are difficult to display in a notebook. Let's introduce sTake which truncates a stream to a fixed length: sTake[s_stream, 0] := stream[] sTake[s_stream, n_] /; n > 0 := With[{nn = n-1}, stream[sHead[s], sTake[sTail[s], nn]]] Let's also introduce sList , which converts a (finite) stream into a list: sList[s_stream] := Module[{tag} , Reap[ NestWhile[(Sow[sHead[#], tag]; sTail[#])&, s, !sEmptyQ[#]&] , tag ][[2]] /. {l_} :> l ] Now we can inspect an integer stream directly: sIntegers[] ~sTake~ 10 // sList (* {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} *) sMap applies a function to every element of a stream: sMap[stream[], _] := stream[] sMap[s_stream, fn_] := stream[fn[sHead[s]], sMap[sTail[s], fn]] sIntegers[] ~sMap~ Prime ~sTake~ 10 // sList (* {2, 3, 5, 7, 11, 13, 17, 19, 23, 29} *) sFilter selects elements from a stream that satisfy a given filter predicate: sFilter[s_, pred_] := NestWhile[sTail, s, (!sEmptyQ[#] && !pred[sHead[#]])&] /. 
stream[h_, t_] :> stream[h, sFilter[t, pred]] sIntegers[] ~sFilter~ OddQ ~sTake~ 15 // sList (* {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29} *) We now have almost all of the pieces in place to address the original problem. All that is missing is a predicate that detects palindromic numbers: palindromicQ[n_] := IntegerDigits[n] /. d_ :> d === Reverse[d] palindromicQ[123] (* False *) palindromicQ[121] (* True *) Now, we can solve the problem: sIntegers[] ~sMap~ Prime ~sFilter~ palindromicQ ~sTake~ 400 // sList (* {2,3,5,7,11,101, ... ,3528253,3541453,3553553,3558553,3563653,3569653} *) The stream facility we have defined here is very basic. It lacks error checking, and further consideration should be given to optimization. However, it demonstrates the power of Mathematica's symbolic programming paradigm. The following listing gives the complete set of definitions: ClearAll[stream] SetAttributes[stream, {HoldAll, Protected}] sEmptyError[] := (Message[stream::empty]; Abort[]) stream::empty = "Attempt to access beyond the end of a stream."; ClearAll[sEmptyQ, sHead, sTail, sTake, sList, sMap, sFilter, sIntegers] sEmptyQ[stream[]] := True sEmptyQ[stream[_, _]] = False; sHead[stream[]] := sEmptyError[] sHead[stream[h_, _]] := h sTail[stream[]] := sEmptyError[] sTail[stream[_, t_]] := t sTake[s_stream, 0] := stream[] sTake[s_stream, n_] /; n > 0 := With[{nn = n-1}, stream[sHead[s], sTake[sTail[s], nn]]] sList[s_stream] := Module[{tag} , Reap[ NestWhile[(Sow[sHead[#], tag]; sTail[#])&, s, !sEmptyQ[#]&] , tag ][[2]] /. {l_} :> l ] sMap[stream[], _] := stream[] sMap[s_stream, fn_] := stream[fn[sHead[s]], sMap[sTail[s], fn]] sFilter[s_, pred_] := NestWhile[sTail, s, (!sEmptyQ[#] && !pred[sHead[#]])&] /. stream[h_, t_] :> stream[h, sFilter[t, pred]] sIntegers[n_:1] := With[{nn = n+1}, stream[n, sIntegers[nn]]] palindromicQ[n_] := IntegerDigits[n] /. d_ :> d === Reverse[d]
{ "source": [ "https://mathematica.stackexchange.com/questions/838", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/178/" ] }
844
I want to borrow the vast amount of packages of R. I know there was one but it is neither sold nor supported anymore. So are there any active open source projects for linking R with Mathematica? Thank you.
{ "source": [ "https://mathematica.stackexchange.com/questions/844", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/220/" ] }
845
Since Internal`Bag , Internal`StuffBag and Internal`BagPart can be compiled down, it is a precious source for various applications. There were already many questions why AppendTo is so slow, and which ways exist to make a dynamically grow-able array which is faster. Since inside Compile many tricks can simply not be used, which is for instance the case for Sow and Reap , this is a good alternative. A fast, compiled version of AppendTo : For a comparison I will use AppendTo directly for an easy loop. Ignore the fact that this would not be necessary here, since we know the number of elements in the result list. In a real application, you maybe wouldn't know this. appendTo = Compile[{{n, _Integer, 0}}, Module[{i, list = Most[{0}]}, For[i = 1, i <= n, ++i, AppendTo[list, i]; ]; list ] ] Using Internal`Bag is not as expensive, since in the above code, the list is copied in each iteration. This is not the case for Internal`Bag . stuffBag = Compile[{{n, _Integer, 0}}, Module[{i, list = Internal`Bag[Most[{0}]]}, For[i = 1, i <= n, ++i, Internal`StuffBag[list, i]; ]; Internal`BagPart[list, All] ] ] Comparing the run time of both functions uncovers the potential of Internal`Bag : First[AbsoluteTiming[#[10^5]]] & /@ {appendTo, stuffBag} (* {4.298237, 0.003207} *) Usage and features The following information was collected from different sources. Here is an article from Daniel Lichtblau who was kind enough to give some insider information. A question on MathGroup led to a conversation with Oleksandr Rasputinov who knew about the third argument of Internal`BagPart . Various other posts on StackOverflow exist which I will not mention explicitly. I will restrict the following to the usage of Internal`Bag and Compile together . While we have 4 functions ( Internal`Bag , Internal`StuffBag , Internal`BagPart , Internal`BagLength ), only the first three can be compiled. 
Therefore, one has to explicitly count the elements which are inserted into the bag if needed (or use Length on All elements). Internal`Bag[] creates an empty bag of type real. When an Integer is inserted it is converted to Real . True is converted to 1.0 and False to 0.0 . Other types of bags are possible too. See below. Internal`StuffBag[b, elm] adds an element elm to the bag b . It is possible to create a bag of bags inside compile. This way it is easy to create a tensor of arbitrary rank. Internal`BagPart[b,i] gives the i -th part of the bag b . Internal`BagPart[b,All] returns a list of all. The Span operator ;; can be used too. Internal`BagPart can have a third argument which is the used Head for the returned expression. Variables of Internal`Bag (or general inside Compile ) require a hint to the compile for deducing the type. A bag of integers can be declared as list = Internal`Bag[Most[{0}]] To my knowledge supported number-types contain Integer , Real and Complex . Examples The important property of the following examples is that they are completely compiled. There is no call to the kernel, and using the Internal`Bag in such a way should most likely speed things up. The famous sum of Gauss; adding the numbers from 1 to 100. Note that the numbers are not explicitly added. I use the third argument to replace the List head with Plus . The only possible heads inside Compile are Plus and Times and List . 
sumToN = Compile[{{n, _Integer, 0}}, Module[{i, list = Internal`Bag[Most[{0}]]}, For[i = 1, i <= n, ++i, Internal`StuffBag[list, i]; ]; Internal`BagPart[list, All, Plus] ] ]; sumToN[100] Creating a rank-2 tensor by creating the inner bag directly inside the constructor of the outer one: tensor2 = Compile[{{n, _Integer, 0}, {m, _Integer, 0}}, Module[{list = Internal`Bag[Most[{1}]], i, j}, Table[ Internal`StuffBag[ list, Internal`Bag[Table[j, {j, m}]] ], {i, n}]; Table[Internal`BagPart[Internal`BagPart[list, i], All], {i, n}] ] ] An equivalent function which inserts every number separately tensor2 = Compile[{{n, _Integer, 0}, {m, _Integer, 0}}, Module[{ list = Internal`Bag[Most[{1}]], elm = Internal`Bag[Most[{1}]], i, j }, Table[ elm = Internal`Bag[Most[{1}]]; Table[Internal`StuffBag[elm, j], {j, m}]; Internal`StuffBag[list, elm], {i, n}]; Table[Internal`BagPart[Internal`BagPart[list, i], All], {i, n}] ] ] A Position for integer matrices: position = Compile[{{mat, _Integer, 2}, {elm, _Integer, 0}}, Module[{result = Internal`Bag[Most[{0}]], i, j}, Table[ If[mat[[i, j]] === elm, Internal`StuffBag[result, Internal`Bag[{i, j}]] ], {i, Length[mat]}, {j, Length[First[mat]]}]; Table[ Internal`BagPart[pos, {1, 2}], {pos, Internal`BagPart[result, All]}] ], CompilationTarget -> "C", RuntimeOptions -> "Speed" ] This last example can easily be used to measure some timings against the kernel function: times = Table[ Block[{data = RandomInteger[{0, 1}, {n, n}]}, Transpose[{ {n, n}, Sqrt[First[AbsoluteTiming[#[data, 1]]] & /@ {position, Position}] }] ], {n, 100, 1000, 200}]; ListLinePlot[Transpose[times]] Open Questions Are there simpler/other ways to tell the compiler the type of a local variable? What bothers me here is that this is not really explained in the docs. It is only mentioned shortly how to define (not declare ) a tensor. When a user wants to have an empty tensor, it is completely unintuitive that he has to use a trick like Most[{1}] . 
Declaring variables would be one of the first things I'd need if I were new to Compile . In this tutorial , I didn't find any hint about this. Are there further features of Bag which may be important to know in combination with Compile ? The timing function of position above leaks memory. After the run {n, 100, 3000, 200} there is 20GB of memory occupied. I haven't investigated this issue really deeply, but when I don't return the list of positions, the memory seems OK. Actually, the memory for the returned positions should be collected after the Block finishes. My system here is Ubuntu 10.04 and Mathematica 8.0.4.
I am somewhat reluctant to offer this as an answer since it is inherently difficult to comprehensively address questions on undocumented functionality. Nonetheless, the following observations do constitute partial answers to points raised in the question and are likely to be of value to anyone trying to write practical compiled code using Bag s. However, caution is always highly advisable when using undocumented functions in a new way, and this is no less true for Bag s. The type of Bag s As far as the Mathematica virtual machine is concerned, Bag s are a numeric type, occupying a scalar Integer , Real , or Complex register, and can contain only scalars or other Bag s. They can be created empty, using the trick described in the question, or pre-stuffed: with a scalar, using Internal`Bag[val] (where val is a scalar of the desired type) with several scalars, using Internal`Bag[tens, lvl] , where tens is a full-rank tensor of the desired numeric type and lvl is a level specification analogous to the second argument of Flatten . For compiled code, lvl $\ge$ ArrayDepth[tens] , as Bag s cannot directly contain tensors. Internal`StuffBag can only be used to insert values of the same type as the register the Bag occupies, a type castable to that type without loss of information (e.g. Integer to Real , or Real to Complex ), or another Bag . Tensors can be inserted after being flattened appropriately using the third argument of StuffBag , which behaves in the same way as the second argument of Bag as described above. Attempts to stuff other items (e.g. un-flattened tensors or values of non-castable types) into a Bag will compile into MainEvaluate calls; however, sharing Bag s between the Mathematica interpreter and virtual machine has not been fully implemented as of Mathematica 8, so these calls will not work as expected. 
As this is relatively easy to do by mistake and there will not necessarily be any indication that it has happened, it is important to check that the compiled bytecode is free of such calls. Example: cf = Compile[{}, Module[{b = Internal`Bag[{1, 2, 3}, 1]}, Internal`StuffBag[b, {{4, 5, 6}, {7, 8, 9}}, 2]; Internal`BagPart[b, All] ] ] cf[] gives: {1, 2, 3, 4, 5, 6, 7, 8, 9} Nested Bag s These are created simply by stuffing one Bag into another, and do not have any special type associated with them except the types of the registers containing the pieces. In particular, there is no "nested Bag type". Per the casting rules given above, it is theoretically possible to stuff Integer Bag s into a Real Bag and later extract them into Integer registers (for example). However, this technique is not to be recommended as the result depends on the virtual machine version; for instance, the following code is compiled into identical bytecode in versions 5.2, 7, and 8, but gives different results: cf2 = Compile[{}, Module[{ br = Internal`Bag@Most[{0.}], parts = Most[{0.}], bi = Internal`Bag@Most[{0}] }, Internal`StuffBag[bi, Range[10], 1]; Internal`StuffBag[br, bi]; parts = Internal`BagPart[br, All]; Internal`BagPart[First[parts], All] ] ] The result from versions 5.2 and 7: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} The result from version 8: {1.} Stuffing Bag s of mixed Real and Integer types into a Real Bag produces even less useful results, since pointer casts are performed by Internal`BagPart without regard to the original type of each constituent Bag , resulting in corrupted numerical values. However, nesting bags works correctly in all versions provided that the inner and outer bags are of identical types. It is also possible to stuff a bag into itself to create a circular reference, although the practical value of this is probably quite limited. Miscellaneous Calling Internal`BagPart with a part specification other than All will crash Mathematica kernels prior to version 8. 
Internal`Bag accepts a third argument, which should be a positive machine integer. The purpose of this argument is not clear, but in any case it cannot be used in compiled code.
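As a quick reference for the typing rules above, here is a hedged sketch of declaring empty bags of each supported scalar type inside Compile (the Most[{...}] trick hands the compiler an empty list of the right type; as with all undocumented functionality, behavior may differ between versions):

```mathematica
typedBags = Compile[{},
   Module[{
     intBag     = Internal`Bag[Most[{0}]],    (* empty Integer bag *)
     realBag    = Internal`Bag[Most[{0.}]],   (* empty Real bag *)
     complexBag = Internal`Bag[Most[{0. I}]]  (* empty Complex bag *)
     },
    Internal`StuffBag[intBag, 1];
    Internal`StuffBag[realBag, 1.5];
    Internal`StuffBag[complexBag, 2. + 3. I];
    Internal`BagPart[realBag, All]
    ]];

typedBags[]  (* {1.5} *)
```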
{ "source": [ "https://mathematica.stackexchange.com/questions/845", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/187/" ] }
850
Is there some way to clear all of my variable and function definitions other than going to Evaluation -> Quit Kernel and firing up a new one?
Maybe this ? ClearAll["Global`*"]
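For completeness, a small illustration (not part of the original answer) of the difference between clearing definitions and removing the symbols themselves:

```mathematica
x = 5; f[y_] := y^2;

ClearAll["Global`*"];   (* definitions gone, but the symbols still exist *)
Names["Global`*"]       (* {"f", "x", "y"} *)

Remove["Global`*"];     (* removes the symbols from the Global` context too *)
Names["Global`*"]       (* {} *)
```

ClearAll is usually enough; Remove matters when a lingering symbol shadows something you are about to load from a package.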
{ "source": [ "https://mathematica.stackexchange.com/questions/850", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/82/" ] }
853
I often correct homework by checking the calculations in Mathematica. Sometimes you would like to have two solutions open at once. However often defined symbols such as f or phi regularly overlap between the two notebooks. Is there a way to separate them other than using different symbols in every notebook? Is it possible to separate some variables yet share others between notebooks?
Maybe this? I have not tried it, but it sounds like this is what you are looking for (if I understood you correctly): Evaluation menu -> Notebook's Default Context -> Unique to This Notebook. So, you do the above for each notebook. I found this in the daily Mathematica tip webpage: http://twitter.com/mathematicatip Update If you want to do it programmatically from within a notebook, run SetOptions[EvaluationNotebook[], CellContext -> Notebook] . Update 2 To set this automatically for all new notebooks, open the Options Inspector ( Ctrl/Command + Shift + O ), and change the scope to "Global Preferences." Then, the option CellContext is found under Cell Options -> Evaluation Options . Change it to "Notebook."
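Regarding the second part of the question, sharing a few symbols while keeping the rest separate, one hedged approach (Shared` is an arbitrary context name chosen purely for illustration) is to refer to shared symbols by an explicit context once each notebook has its own default context:

```mathematica
(* In each notebook, after setting CellContext -> Notebook: *)
f[x_] := x^2          (* private: lives in this notebook's unique context *)

Shared`phi = 1.618;   (* explicit context: visible from every notebook *)

(* Any other notebook can read it the same way: *)
Shared`phi            (* 1.618 *)
```

Symbols written with a full context path bypass the per-notebook context, so f stays local while Shared`phi is global to the kernel session.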
{ "source": [ "https://mathematica.stackexchange.com/questions/853", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/82/" ] }
866
What is the proposed approach if one wants to simultaneously fit multiple functions to multiple datasets with shared parameters? As an example consider the following case: We have two measurements of Gaussian line profiles and we would like to fit a Gaussian to each of them but we expect them to be at the same line center, i.e. the fitting should use the same line center for both Gaussians. The solution I came up with looks a little clumsy. Any ideas on how to do this better, especially in cases where we have more than 2 datasets and more than one shared parameter? Example: f[x_, amplitude_, centroid_, sigma_] := amplitude Exp[-((x - centroid)^2/sigma^2)] data1 = Table[{x, RandomReal[{-.1, .1}] + f[x, 1, 1, 1]}, {x, -4, 6, 0.25}]; data2 = Table[{x, RandomReal[{-.1, .1}] + f[x, .5, 1, 2]}, {x, -8, 10, 0.5}]; gauss1 = NonlinearModelFit[data1, f[x, a1, x1, b1], {a1, x1, b1}, x, MaxIterations -> 1000, Method -> NMinimize]; gauss2 = NonlinearModelFit[data2, Evaluate[f[x, a2, x1, b2] /. gauss1["BestFitParameters"]], {a2, b2}, x, MaxIterations -> 1000, Method -> NMinimize]; Join[gauss1["BestFitParameters"],gauss2["BestFitParameters"]] datpl = ListPlot[{data1, data2}, Joined -> True, PlotRange -> {{-10, 10}, All}, Frame -> True, PlotStyle -> {Black, Red}, Axes -> False, InterpolationOrder -> 0]; Show[{datpl, Plot[{Evaluate[f[x, a1, x1, b1] /. gauss1["BestFitParameters"]], Evaluate[ f[x, a2, x1 /. gauss1["BestFitParameters"], b2] /. gauss2["BestFitParameters"]]}, {x, -10, 10}, PlotRange -> All, PlotStyle -> {Black, Red}, Frame -> True, Axes -> False]}]
This is an extension of Heike's answer to address the question of error estimates. I'll follow the book Data Analysis: A Bayesian Tutorial by D.S. Sivia and J. Skilling (Oxford University Press) . Basically, any error estimate depends on the basic assumptions you make. The previous answers implicitly assume uniform normally distributed noise: $\epsilon \sim N(0, \sigma)$. If you know $\sigma$ the error estimate is straightforward. With the same definitions: data1 = Table[{x, RandomReal[{-.1, .1}] + f[x, 1, 1, 1]}, {x, -4, 6, 0.25}]; data2 = Table[{x, RandomReal[{-.1, .1}] + f[x, .5, 1, 2]}, {x, -8, 10, 0.5}]; f[x_, amplitude_, centroid_, sigma_] := amplitude Exp[-((x - centroid)^2/sigma^2)] Add the variables: vars = {mu, au1, s1, au2, s2}; The variance of the error is (analytically, from the definition above): noiseVariance = Integrate [x^2, {x, -0.1, 0.1}]; The log-likelihood of the model is: logModel = -Total[ (data1[[All, 2]] - (f[#, au1, mu, s1] & /@ data1[[All, 1]]) )^2 /noiseVariance]/2 - Total[ (data2[[All, 2]] - (f[#, au2, mu, s2] & /@ data2[[All, 1]]) )^2 /noiseVariance]/2; Optimize the log-likelihood (note the change of sign leading to a maximization instead of minimization) fit = FindMaximum[logModel, vars] The fit will be the same, as the variance estimation doesn't affect the maximum, so I won't repeat it here. For the error estimates, the covariance matrix is found as minus the inverse of the hessian of the log-likelihood function, so (DA p.50): $$ \sigma_{ij}^2 = -[\nabla \nabla L]^{-1}_{ij} $$ hessianL = D[logModel {vars, 2}]; parameterStdDeviations = Sqrt[- Diagonal@Inverse@hessianL]; {vars, #1 \[PlusMinus] #2 & @@@ ({vars /. fit[[2]], parameterStdDeviations}\[Transpose]) }\[Transpose] // TableForm If $\sigma$ is unknown the analysis is slightly trickier, but the results are easily implemented. If the error is additive guassian noise of unknown variance the correct estimator is (DA p. 
67): $$ s^2 = \frac{1}{N-1} \sum_{k=1}^N (data_k - f[x_k; model])^2 $$ estimatedVariance1 = Total[(data1[[All, 2]] - (f[#, au1, mu, s1] & /@ data1[[All, 1]]) )^2] / (Length@data1 - 1); estimatedVariance2 = Total[(data2[[All, 2]] - (f[#, au2, mu, s2] & /@ data2[[All, 1]]) )^2] / (Length@data2 - 1); As stated above, the magnitude of the variance won't affect our point estimates in the model, so we can use the same code above and just inject the newly estimated variance into the log-likelihood function. This seems to be equivalent to the default behaviour of NonlinearModelFit. As you seem to indicate that you are fitting spectra from a counting experiment, you might have better performance if you assume Poisson counting noise instead; then the variance for each channel is estimated as the number of counts in that channel: $$ \sigma^2_k \approx data_k $$ You might also want to consider adding a background model (a constant background is a simple extension of the above), depending on the noise level.
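The Poisson-noise suggestion can be sketched as follows (my illustration, not part of the original answer): approximate the Poisson likelihood by a Gaussian whose per-channel variance equals the observed count, which yields a weighted least-squares log-likelihood. Here counts1 and counts2 are hypothetical lists of {x, count} pairs with strictly positive counts; f, mu, au1, s1, au2, s2 are as defined above.

```mathematica
(* Weighted log-likelihood with sigma_k^2 ~ data_k (Gaussian approximation to Poisson noise) *)
logModelCounting =
  -Total[(counts1[[All, 2]] - (f[#, au1, mu, s1] & /@ counts1[[All, 1]]))^2/
      counts1[[All, 2]]]/2 -
   Total[(counts2[[All, 2]] - (f[#, au2, mu, s2] & /@ counts2[[All, 1]]))^2/
      counts2[[All, 2]]]/2;
fitCounting = FindMaximum[logModelCounting, {mu, au1, s1, au2, s2}]
```

The error estimates then follow from the Hessian exactly as in the known-variance case above.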
{ "source": [ "https://mathematica.stackexchange.com/questions/866", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/135/" ] }
900
Suppose I have the following list { {10,b,30}, {100,a,40}, {1000,b,10}, {1000,b,70}, {100,b,20}, {10,b,70} } How do I find the rows that have the maximum value in the 3rd column, in this case (*{{1000,b,70},{10,b,70}}*) ?
With: dat = {{10, b, 30}, {100, a, 40}, {1000, b, 10}, {1000, b, 70}, {100, b, 20}, {10, b, 70}}; Perhaps most directly: Cases[dat, {_, _, Max@dat[[All, 3]]}] More approaches: Last @ SplitBy[SortBy[dat, {#[[3]] &}], #[[3]] &] Pick[dat, #, Max@#] &@dat[[All, 3]] Reap[Fold[(If[#2[[3]] >= #, Sow@#2]; #2[[3]]) &, dat]][[2, 1]] Of these Pick appears to be concise and efficient, so it is my recommendation. Edit: Position and Extract are three times as efficient as Pick on some data. Using Transpose is slightly more efficient on packed rectangular data. dat ~Extract~ Position[#, Max@#] & @ dat[[All, 3]] dat ~Extract~ Position[#, Max@#] & @ Part[dat\[Transpose], 3] Here are some timings performed in version 7: SetAttributes[timeAvg, HoldFirst] timeAvg[func_] := Do[If[# > 0.3, Return[#/5^i]] & @@ Timing@Do[func, {5^i}], {i, 0, 15}] SeedRandom[1] dat = RandomInteger[99999, {500000, 3}]; Cases[dat, {_, _, Max@dat[[All, 3]]}] // timeAvg Last@SplitBy[SortBy[dat, {#[[3]] &}], #[[3]] &] // timeAvg Pick[dat, #, Max@#] &@dat[[All, 3]] // timeAvg Reap[Fold[(If[#2[[3]] >= #, Sow@#2]; #2[[3]]) &, dat]][[2, 1]] // timeAvg dat ~Extract~ Position[#, Max@#] &@dat[[All, 3]] // timeAvg dat ~Extract~ Position[#, Max@#] &@Part[dat\[Transpose], 3] // timeAvg 0.1278 0.764 0.0904 0.904 0.02996 0.02496 (In actuality I restarted the Kernel between each individual timing line as otherwise each run gets slower, unfairly biasing the test toward the earlier lines.) These can be further optimized by using faster position functions for numeric data. 
Michael E2 recommended compiling (probably faster in versions after 7): pos = Compile[{{list, _Real, 1}, {pat, _Real}}, Position[list, pat]]; dat ~Extract~ pos[#, Max@#] & @ Part[dat\[Transpose], 3] // timeAvg 0.01372 My favorite method is SparseArray properties: spos = SparseArray[Unitize[#], Automatic, 1]["AdjacencyLists"] &; dat[[spos[# - Max@#]]] & @ Part[dat\[Transpose], 3] // timeAvg 0.002872 This is now about 30X faster than Pick , my original recommendation.
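Note for newer versions: since Mathematica 10 the built-in MaximalBy returns all rows attaining the maximum directly:

```mathematica
dat = {{10, b, 30}, {100, a, 40}, {1000, b, 10}, {1000, b, 70},
   {100, b, 20}, {10, b, 70}};
MaximalBy[dat, Last]
(* {{1000, b, 70}, {10, b, 70}} *)
```

For large packed arrays the SparseArray method above may still be faster, so it is worth timing both on your own data.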
{ "source": [ "https://mathematica.stackexchange.com/questions/900", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/99/" ] }
907
What are some complete examples of what one would include in a FrontEnd init.m that would make use of FrontEnd`AddMenuCommands to add commands to the Mathematica system menus (without having to directly edit MenuSetup.tr )? For example, to do the following: Add an item to the Help menu that brings up the DocumentationCenter window for Installed AddOns. Add an item to the Format > Background Color sub-menu that would add a new color? Yes, I've already consulted MathGroup threads going back to 2009, and earlier, on modifying the startup menu by editing MenuSetup.tr and about evaluating, during a session, an expression to do a FrontEndExecute . But those messages don't seem to give a complete example of what I'm asking here.
I've never seen that command before but this does work at least in version 7 on Windows: FrontEndExecute[ AddMenuCommands[ "BackgroundDialog", {Delimiter, Item["L&inen", Background -> RGBColor[0.980, 0.941, 0.902]], Item["A&liceBlue", Background -> RGBColor[0.941, 0.973, 1.0]], Item["Min&tCream", Background -> RGBColor[0.961, 1.0, 0.980]], Item["Lig&htYellow", Background -> RGBColor[1.0, 1.0, 0.878]], Item["Smok&e", Background -> RGBColor[0.961, 0.961, 0.961]], Item["Mi&styRose", Background -> RGBColor[1.0, 0.894, 0.882]]}]] That comes from this MathGroup post . (Now improved; see comments.) Further reading tells us: You can completely reset the menus using... FrontEndExecute[FrontEnd`ResetMenusPacket[{Automatic}]]
{ "source": [ "https://mathematica.stackexchange.com/questions/907", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/148/" ] }
916
In Python, there is a function all which returns true if all of its arguments are true, and any which returns true if at least one of its arguments is true. I find these quite useful in functional programming. Given how much Mathematica encourages functional programming constructs, I was surprised not to find anything equivalent to these all and any functions in the documentation. Does Mathematica have equivalents to these functions, or some standard way to achieve the same effect? I can implement them myself as AllOf[b_List] := Fold[And, True, b][[1]] AllOf[b__] := Fold[And, True, {b}][[1]] AnyOf[b_List] := Fold[Or, True, b][[1]] AnyOf[b__] := Fold[Or, True, {b}][[1]] (these may not always be correct, but they've worked for my purposes). But I would rather use something built-in if it exists.
Both, And and Or should work for All and Any respectively. You may have to get creative in how you apply them, though. For instance, And @@ {True, False, True} works just like you would expect AllOf @ {True, False, True} to without any additional work. Similarly, Or @@ {False, True, False} works like AnyOf .
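Since Mathematica 10 there are also dedicated built-ins, AllTrue, AnyTrue, and NoneTrue, which take a list and a predicate and serve the same role as Python's all and any:

```mathematica
AllTrue[{2, 4, 6}, EvenQ]    (* True *)
AnyTrue[{1, 3, 5}, EvenQ]    (* False *)
AnyTrue[{1, 3, 6}, EvenQ]    (* True *)
```

These also read more naturally than the Apply idiom when the test is a predicate rather than a list of explicit True/False values.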
{ "source": [ "https://mathematica.stackexchange.com/questions/916", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/79/" ] }
938
What's the best way to make a drop shadow for a 3D object? image = Graphics3D[Sphere[], Boxed -> False] I can get a blurry black outline of this: imageShadow = Blur[RegionBinarize[ColorNegate[image], (* bottom left corner --> *) {{1, 1}}, 0.1], 20] which could act as a good shadow: But combining them is a bit harder... Any suggestions?
This produces a 2D shadow. If you meant a 3D shadow (on the x-y plane), see code below. image = Rasterize[Graphics3D[Sphere[], Boxed -> False]]; shadow = Blur[RegionBinarize[ColorNegate[image], {{1, 1}}, 0.1], 20]; image = SetAlphaChannel[image, ColorNegate@Binarize[image, {1, 1}]]; Show[{shadow, image}] The position of the shadow has to be fine tuned manually. I also managed to construct it in 3D (rotatable), though I cannot make the bottom polygon transparent. shadow = Blur[ RegionBinarize[Graphics[Circle[], ImagePadding -> 60], {{1, 1}}, 0.1], 40]; shadow = SetAlphaChannel[shadow, ColorNegate@shadow]; Graphics3D[{ Sphere[], EdgeForm@None, [email protected], Texture@shadow, Polygon[{{-1, -1, -2}, {1, -1, -2}, {1, 1, -2}, {-1, 1, -2}, {-1, -1, -2}}, VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}] }, Boxed -> False]
{ "source": [ "https://mathematica.stackexchange.com/questions/938", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/61/" ] }
941
I have a list and I want to find (in this particular case the first) appearance of any of some subsequences, of possibly different lengths. None of the subsequences is a subsequence of any other. In my particular case I could do this by translating the list to a string and using StringPosition . But I could only do that because all the elements of my list were one character long. Before realizing this I had implemented a not-nearly-one-liner that did the trick without resorting to strings. It didn't do any useless comparisons, but it did lots of useless copying of the list as a whole, and it turned out to be 50 times slower than the StringPosition version. It can be improved, avoiding that issue, but making it even less of a one-liner. The task just seems too easy to describe to be so not-easy to program well... Is there an efficient way to do it for the general case? "Find the first appearance of one of many subsequences (of possibly different lengths; perhaps they could be patterns, or not) in a list" (Wow, I think I just thought of a good way, I'll give it a shot... If it works I'll auto-answer. But I'd still like your input, I'm afraid I'm missing some options)
I asked the same question on StackOverflow recently , and the answer that is now my favourite came from Jan Pöschko (modified): findSubsequence[list_, {ss__}] := ReplaceList[list, {pre___, ss, ___} :> Length[{pre}] + 1] This will find all positions of ss in list . Example: findSubsequence[Range[50] ~Mod~ 17, {4, 5, 6}] {4, 21, 38} Despite using patterns, this solution runs very quickly, even for packed arrays. Please see the question I linked to for more possibilities. A potentially useful generalization to other heads may be had with: findSubsequence[list : h_[__], _[ss__]] := ReplaceList[list, h[pre___, ss, ___] :> Length[{pre}] + 1] Allowing such forms as: x = Hold[1 + 1, 2 + 1, 3 + 1, 4 + 1, 2 + 1, 3 + 1, 1 + 1, 2 + 1, 3 + 1]; findSubsequence[x, Hold[2 + 1, 3 + 1]] {2, 5, 8}
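Since version 10.1 there is also a built-in, SequencePosition, which returns {start, end} position pairs for each occurrence of the subsequence:

```mathematica
SequencePosition[Mod[Range[50], 17], {4, 5, 6}]
(* {{4, 6}, {21, 23}, {38, 40}} *)
```

It also accepts patterns in the subsequence specification, covering the generalized case asked about in the question.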
{ "source": [ "https://mathematica.stackexchange.com/questions/941", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/109/" ] }
970
Is there a sprintf() command (some command that takes a printf -style format string and a list of values to insert into the string) or something very much like it (preferably with a similar style of format specifiers)? Or, alternately, how would I implement sprintf() in Mathematica?
I've had a need for such a function several times, and I found this implementation of C-style *printf functions , by Vlad Seghete . To use it, all you need to do is extract the files to $UserBaseDirectory/MathPrintF/ and you're all set. Here's an example once you've installed it: <<MathPrintF` sprintf["%d %s %d %s, %s %s %s %s", Sequence @@ Riffle[{1, 2, "red", "blue"}, {"fish"}, {2, -1, 2}]] Out[1]= 1 fish 2 fish, red fish blue fish Also note the following caveat in the README Limited Functionality While we tried to mimic the C-standard as much as possible, only certain features are implemented. These are mainly dictated by what we needed at the time. In particular %d, %f, %e, %E and %s with most of their options are implemented.
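If you only need simple positional substitution, the built-in StringForm covers part of the same ground without any package, though it lacks C-style width and precision specifiers (combine it with NumberForm or PaddedForm for those):

```mathematica
ToString @ StringForm["`1` `3` `2` `3`, red `3` blue `3`", 1, 2, "fish"]
(* "1 fish 2 fish, red fish blue fish" *)
```

Here the backquoted numbers are positional slots referring to the arguments that follow the template.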
{ "source": [ "https://mathematica.stackexchange.com/questions/970", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7/" ] }
990
I really miss having something like a struct in Mathematica. I know of (and regularly use) a couple of programming techniques which feel like a struct (e.g., using downvalues ), but are ultimately unsatisfactory (perhaps I'm using downvalues incorrectly). What programming approaches are available which provide similar functionality to a struct ? Here's an abbreviated (and hopefully not too obtuse) example of how I use downvalues to emulate a struct. In this case, I'm distinguishing between TLC and TEC (these are sets of parameters for two different phases of a Moon mission, trans-lunar cruise and trans-earth cruise): deadBandWidth[X][TLC] ^= 10. °; deadBandWidth[Y][TLC] ^= 10. °; deadBandWidth[Z][TLC] ^= 20. °; sunSearchAngle[Z][TLC] ^= 230. °; sunSearchRate[Z][TLC] ^= 1. °/Second; sunSearchAngle[X][TLC] ^= 75. °; sunSearchRate[X][TLC] ^= 1. °/Second; safingSpinRate[TLC] ^= (360. °)/Day; sunVector[TLC] ^= {-Cos[45. °], 0., Sin[45. °]}; safingSpinAxis[TLC] ^= sunVector[TLC]; deadBandWidth[X][TEC] ^= 20. °; deadBandWidth[Y][TEC] ^= 20. °; deadBandWidth[Z][TEC] ^= 20. °; sunSearchAngle[Z][TEC] ^= 230. °; sunSearchRate[Z][TEC] ^= 1. °/Second; sunSearchAngle[X][TEC] ^= 75. °; sunSearchRate[X][TEC] ^= 1. °/Second; safingSpinRate[TEC] ^= (360. °)/Hour; sunVector[TEC] ^= {0., 0., +1.}; safingSpinAxis[TEC] ^= sunVector[TEC]; ?TLC Global`TLC safingSpinAxis[TLC]^={-0.707107,0.,0.707107} safingSpinRate[TLC]^=6.28319/Day sunVector[TLC]^={-0.707107,0.,0.707107} deadBandWidth[X][TLC]^=0.174533 deadBandWidth[Y][TLC]^=0.174533 deadBandWidth[Z][TLC]^=0.349066 sunSearchAngle[X][TLC]^=1.309 sunSearchAngle[Z][TLC]^=4.01426 sunSearchRate[X][TLC]^=0.0174533/Second sunSearchRate[Z][TLC]^=0.0174533/Second
Update: Mathematica 10 has introduced Association , which can be used as a close equivalent of struct s. params = <| "par1" -> 1, "par2" -> 2 |> params["par1"] (* ==> 1 *) In version 10 pure functions can have named arguments, and can be effectively used as expression templates where the slots can be populated from an association. This is similar to the technique I describe in the original version of this post (below the line). #par1 + #par2 & [params] will evaluate to 1 + 2 then to 3 . That said, my personal workflow still fits better with the approach described below the line ( withRules ). The reason for this is that I tend to build up calculations interactively and incrementally . This means that I do not start by writing the equivalent of an expression template (which would require thinking ahead...). Instead I start with all the values explicitly written out, and later I replace them with a global variable. This global variable can be simply Unset , and given a local value using withRules , then eventually changed into a function argument. Quoting the OP's comment: Most of the work I do involves constructing mathematical models and then testing various scenarios against those models. I'd like to be able to populate a particular scenario and then pass that scenario to a model. I'd also like to be able to copy that scenario, modify one or more parameters, and then pass the new scenario to the model. The requirement, as I understand, is to be able to pass many parameter values around in a structured way. Lists of rules are convenient for this: params = {par1 -> 1, par2 -> 2, par3 -> {x,y,z}} They can be extracted like this: par1 /. params (* ==> 1 *) Once I wrote a function for substituting such parameter lists into bigger pieces of code: ClearAll[withRules] SetAttributes[withRules, HoldAll] withRules[rules_, expr_] := First@PreemptProtect@Internal`InheritedBlock[ {Rule, RuleDelayed}, SetAttributes[{Rule, RuleDelayed}, HoldFirst]; Hold[expr] /. 
rules ] It can be used like this: withRules[params, par1 + par2 ] (* ==> 3 *) withRules can contain complex code inside, and all occurrences of par1 , par2 , etc. will be substituted with the values from the parameter list. We can also write a function for easily modifying only a single parameter (from the whole list), and returning a new parameter list. Here's a simple implementation: setParam[paramList_, newRules_] := DeleteDuplicates[Join[newRules, paramList], First[#1] === First[#2] &] Example usage: setParam[params, {par2 -> 10}] (* ==> {par2 -> 10, par1 -> 1, par3 -> {x, y, z}} *) Another list which has a different value for par2 is returned. If needed, this could be extended to support more complex, structured lists such as { par1 -> 1, group1 -> {par2x -> 10, par2y -> 20}} , much like how the built-in option-handling works. Addendum by celtschk: It's possible to extract a value from a list of rules using OptionValue as well: OptionValue[params, par1] .
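With the version 10 Associations mentioned at the top of this answer, a setParam-style update is a built-in one-liner, since appending a rule whose key already exists overwrites the old value:

```mathematica
params = <|"par1" -> 1, "par2" -> 2|>;
Append[params, "par2" -> 10]
(* <|"par1" -> 1, "par2" -> 10|> *)
```

Like setParam, this returns a new parameter set and leaves the original untouched.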
{ "source": [ "https://mathematica.stackexchange.com/questions/990", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/27/" ] }
1,004
I tried Subscript[a, 0] = 1 (* 1 *) Clear[Subscript[a, 0]] During evaluation of Clear::ssym: Subscript[a, 0] is not a symbol or a string. >> Clear[a] Subscript[a, 0] (* 1 *) Any idea?
Yes you can, with limitations. You have at least three different ways to make an assignment to a subscripted symbol a 0 : make a rule for Subscript make a rule for a "symbolize" a 0 using the Notation package/palette In each case below, when I write e.g. Subscript[a, 1] this can also be entered as a 1 by typing a then Ctrl + _ then 1 . When you write: Subscript[a, 1] = "dog"; You make an assignment to Subscript : DownValues[Subscript] {HoldPattern[a 1 ] :> "dog"} You make a rule for a by using TagSet : a /: Subscript[a, 2] = "cat"; UpValues[a] {HoldPattern[a 2 ] :> "cat"} If you use the Notation palette you mess with underlying Box forms behind the scenes, allowing for assignment to OwnValues : Each of these can be cleared with either Unset or TagUnset : Subscript[a, 1] =. a /: Subscript[a, 2] =.
{ "source": [ "https://mathematica.stackexchange.com/questions/1004", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/333/" ] }
1,008
It is not hard to come across a function for which ContourPlot works too slowly, and it seems natural that such a computation could be parallelized well. However, the naive Parallelize@ContourPlot produces " ContourPlot[...] cannot be parallelized; "... So, is it possible to parallelize ContourPlot?
I second @Verbeia's suggestion: compute the function on a mesh of points and use ListContourPlot . The disadvantage is that ListContourPlot has no adaptive sampling, so it'd be preferable if we could do our own adaptive sampling somehow. Adaptive sampling can give you a much better result while needing to compute the function in far less points---and the problem here is indeed computation time. So ContourPlot with its adaptive sampling might give a better result in less time on a single CPU than ListContourPlot will with a high resolution mesh computed on many CPUs. Adaptive sampling is what I asked about (and solved) here: Adaptive sampling for slow to compute functions in 2D The method I implemented there is usable (I am using it for something very similar to what you describe) but it is not nearly as good as ContourPlot 's own. So one might still try to somehow make use of it. I'm quoting one suggestion I received from Leonid Shifrin there (in a comment): You probably can control the DensityPlot , although not directly. Since it calls your function, you can simply Sow the values until some criteria (which you define) is violated (or satisfied). Then, you stop via throwing an exception , and catching it in the outer function, but still inside Reap . Alternatively, you could just start fooling DensityPlot by supplying faked values (perhaps, interpolated, or whatever), and it will stop by itself, I guess. Not sure this will work for you, but it may be worth trying. I have not tried to implement this before, but I think it could work if your function is sufficiently smooth (which mine is definitely not, but yours may be). 
Here's a quick sample implementation of how it could work: First, let's define a sample function to plot: fun[{x_, y_}] := 1/(1 + Exp[10 (Norm[{x, y}] - 3)]) Let's divide both the $x$ and $y$ axes into 5 parts on the interval $[0,5]$ and generate a mesh of points: initialDivision = Range[0, 5]; points = N@Tuples[initialDivision, {2}]; Calculate function values on the initial mesh. This can be parallelized (just use ParallelMap ): values = fun /@ points; This counter i will be used to control the maximal subdivisions in ContourPlot : i = 0; Now put the following code into a single cell, and evaluate it several times. Each time a finer and finer approximation will be computed. The points where function values have been computed will also be visualized. Note that I fixed the plot points in ContourPlot to force it to use the same initial mesh that I used, and I also fixed the number of contours. if = Interpolation@ArrayFlatten[{{points, List /@ values}}] {plot, {newpoints}} = Reap[ ContourPlot[if[x, y], {x, 0, 5}, {y, 0, 5}, Contours -> Range[0, 1, .1], MaxRecursion -> (++i), PlotPoints -> Length[initialDivision], EvaluationMonitor :> Sow[{x, y}]] ]; plot newpoints = Complement[newpoints, points]; newvalues = fun /@ newpoints; (* <-- this can be parallelized *) points = Join[points, newpoints]; values = Join[values, newvalues]; Graphics[Point[points]] After a few iterations the contour plot and the point mesh will look like this (note that the code above only plots the contours for the previous step, not the current results): After 3 iterations, this method has computed the function value at 3809 points for this particular function. Let's compare this with a plain ContourPlot using the same parameters: ContourPlot[fun[{x, y}], {x, 0, 5}, {y, 0, 5}, PlotPoints -> 6, MaxRecursion -> 3] The quality of the plot is about the same with a plain ContourPlot as well. How many points did the plain ContourPlot use? 
Reap[ContourPlot[fun[{x, y}], {x, 0, 5}, {y, 0, 5}, PlotPoints -> 6, MaxRecursion -> 3, EvaluationMonitor :> Sow[{x, y}]]][[2, 1]] // Length (* ==> 3790 *) It uses almost the same number of points, so if the bottleneck is computing f , the method I described is going to be almost as fast as ContourPlot on a single core, with the advantage that it is parallelizable for multiple cores. The next step would be packaging this up into a self-contained function, but seeing how the quality improves step by step is also valuable, as you can make decisions about when to stop calculating (and avoid excessive computation times). I find it quite disappointing that all those nice and fast algorithms that plotting functions use (fast Voronoi cells, Delaunay triangulation, adaptive sampling) are not directly accessible by users. We either have to use hacks to access these algorithms or reimplement them.
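For completeness, the parallelizable step in the iteration above would look like this (a sketch, assuming a standard parallel kernel setup):

```mathematica
LaunchKernels[];
DistributeDefinitions[fun];          (* make fun available on the subkernels *)
newvalues = ParallelMap[fun, newpoints];  (* replaces fun /@ newpoints *)
```

Since the function evaluations at the new points are independent, this step scales straightforwardly with the number of kernels.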
{ "source": [ "https://mathematica.stackexchange.com/questions/1008", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/219/" ] }
1,088
If I want to count the number of zeros at the (right) end of a large number, like $12345!$, I can use something like: Length[Last[Split[IntegerDigits[12345!]]]] But this seems clumsy, since it's potentially doing the full work of Split[] on the whole list of digits when all I need is the length of the run of $0$s at the end of the list. Is there a more efficient (and particularly more Mathematica-elegant) way to do this? (The answer for the example should be 3082.)
For general large integers n , I don't know if there's a better method than Min[IntegerExponent[n, 5], IntegerExponent[n, 2]] . Or more compactly, IntegerExponent[n, 10] or IntegerExponent[n] .
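A quick check against the factorial example from the question ( IntegerExponent[n] defaults to base 10):

```mathematica
IntegerExponent[12345!]     (* 3082 *)
IntegerExponent[12345!, 5]  (* 3082; the factors of 5 are the scarce ones *)
```

This avoids computing the digit list entirely, working directly on the factorization-exponent level.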
{ "source": [ "https://mathematica.stackexchange.com/questions/1088", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7/" ] }
1,096
Is there a list somewhere of the functions that Compile can compile, or of the cases in which a particular function can be compiled, that I haven't found? I'd be glad even for a list of some functions that surprisingly aren't compilable, and of how to do without them. I am unhappy every time I have to rewrite or redesign code because it makes external calls for functions I didn't expect. I'd like to know how you handle all that, and what you keep in mind.
Yes, but this only exists in version 8 onwards and is undocumented: Compile`CompilerFunctions[] // Sort giving, for reference: {Abs, AddTo, And, Append, AppendTo, Apply, ArcCos, ArcCosh, ArcCot, ArcCoth, ArcCsc, ArcCsch, ArcSec, ArcSech, ArcSin, ArcSinh, ArcTan, ArcTanh, Arg, Array, ArrayDepth, Internal`Bag, Internal`BagPart, BitAnd, BitNot, BitOr, BitXor, Block, BlockRandom, Boole, Break, Cases, Catch, Ceiling, Chop, Internal`CompileError, System`Private`CompileSymbol, Complement, ComposeList, CompoundExpression, Conjugate, ConjugateTranspose, Continue, Cos, Cosh, Cot, Coth, Count, Csc, Csch, Decrement, Delete, DeleteCases, Dimensions, Divide, DivideBy, Do, Dot, Drop, Equal, Erf, Erfc, EvenQ, Exp, Fibonacci, First, FixedPoint, FixedPointList, Flatten, NDSolve`FEM`FlattenAll, Floor, Fold, FoldList, For, FractionalPart, FreeQ, Compile`GetElement, Goto, Greater, GreaterEqual, Gudermannian, Haversine, If, Im, Implies, Increment, Inequality, Compile`InnerDo, Insert, IntegerDigits, IntegerPart, Intersection, InverseGudermannian, InverseHaversine, Compile`IteratorCount, Join, Label, Last, Length, Less, LessEqual, List, Log, Log10, Log2, LucasL, Map, MapAll, MapAt, MapIndexed, MapThread, NDSolve`FEM`MapThreadDot, MatrixQ, Max, MemberQ, Min, Minus, Mod, Compile`Mod1, Module, Most, N, Negative, Nest, NestList, NonNegative, Not, OddQ, Or, OrderedQ, Out, Outer, Part, Partition, Piecewise, Plus, Position, Positive, Power, PreDecrement, PreIncrement, Prepend, PrependTo, Product, Quotient, Random, RandomChoice, RandomComplex, RandomInteger, RandomReal, RandomSample, RandomVariate, Range, Re, ReplacePart, Rest, Return, Reverse, RotateLeft, RotateRight, Round, RuleCondition, SameQ, Scan, Sec, Sech, SeedRandom, Select, Set, SetDelayed, Compile`SetIterate, Sign, Sin, Sinc, Sinh, Sort, Sqrt, Internal`Square, Internal`StuffBag, Subtract, SubtractFrom, Sum, Switch, Table, Take, Tan, Tanh, TensorRank, Throw, Times, TimesBy, Tr, Transpose, Unequal, Union, Unitize, UnitStep, UnsameQ, 
VectorQ, Which, While, With, Xor} As of Mathematica 10.0.2, there are also the following functions: {Gamma, Indexed, LogGamma, LogisticSigmoid, Internal`ReciprocalSqrt} As of Mathematica 11, there are also the following functions: {Internal`Expm1, Internal`Log1p, Ramp} As of Mathematica 11.2, there are also the following functions: {RealAbs, RealSign} About Tr : Please note that Tr appears in this list, but cannot actually be compiled without a call to MainEvaluate[] . It is unclear if this is deliberate or a bug . Edit: additional functions I have just discovered the symbol Internal`CompileValues , which provides various definitions and function calls needed to compile further functions not in the list above. Using the following code, Internal`CompileValues[]; (* to trigger auto-load *) ClearAttributes[Internal`CompileValues, ReadProtected]; syms = DownValues[Internal`CompileValues] /. HoldPattern[Verbatim[HoldPattern][Internal`CompileValues[sym_]] :> _] :> sym; Complement[syms, Compile`CompilerFunctions[]] we get some more compilable functions as follows: {Accumulate, ConstantArray, Cross, Depth, Det, DiagonalMatrix, Differences, NDSolve`FEM`FEMDot, NDSolve`FEM`FEMHold, NDSolve`FEM`FEMInverse, NDSolve`FEM`FEMPart, NDSolve`FEM`FEMTDot, NDSolve`FEM`FEMTotalTimes, NDSolve`FEM`FEMZeroMatrix, FromDigits, Identity, IdentityMatrix, Inverse, LinearSolve, Mean, Median, Nand, NestWhile, NestWhileList, Nor, Norm, Ordering, PadLeft, PadRight, Permutations, Ratios, Signature, SquareWave, StandardDeviation, Tally, Total, TrueQ, Variance} Looking at the definition of Internal`CompileValues[sym] for sym in the list above will provide some additional information about how these functions are compiled. This can range from type information (for e.g. Inverse ), through to an implementation in terms of lower-level functions (e.g. NestWhileList ). 
One can presumably also make one's own implementations of non-compilable functions using this mechanism, giving Compile the ability to compile a wider range of functions than it usually would be able to. As of Mathematica 10.3, there are also the following functions: {DeleteDuplicates, Region`Mesh`SmallMatrixRank, Region`Mesh`SmallQRSolve, Region`Mesh`SmallSingularValues, Region`Mesh`SmallSingularValueSystem, Region`Mesh`SmallSVDSolve, NDSolve`SwitchingVariable} As of Mathematica 11, there are also the following functions: {NearestFunction, RegionDistanceFunction, RegionMemberFunction, RegionNearestFunction} Edit 2: the meaning of the second list In response to a recent question , I want to be clear that the presence of a function in the second list given above does not necessarily mean it can be compiled into a form free of MainEvaluate calls. If a top-level function is already highly optimized (as e.g. LinearSolve is), the purpose of Internal`CompileValues[func] may be solely to provide type information on the return value, assuming that this can be inferred from the types of the arguments or some other salient information. This mechanism allows more complex functions that call these highly-optimized top-level functions to be compiled more completely since there is no longer any question of what the return type may be and so further unnecessary MainEvaluate calls may be avoided. It does not imply that the use of MainEvaluate is unnecessary to call the function itself .
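Given these lists, a quick membership test (version 8+) tells you whether a symbol is in the directly compilable set; according to the lists above, Fold and Dot are, while Mean is only handled through the Internal`CompileValues mechanism:

```mathematica
compilableQ[f_Symbol] := MemberQ[Compile`CompilerFunctions[], f]
compilableQ /@ {Fold, Mean, Dot}
(* {True, False, True} *)
```

For a definitive answer on any particular piece of code, inspecting it with CompilePrint (as in the next question below) and looking for MainEvaluate calls remains the most reliable check.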
{ "source": [ "https://mathematica.stackexchange.com/questions/1096", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/109/" ] }
1,124
Recently, Oleksandr kindly showed a list of Mathematica commands that can be compiled . RandomVariate was part of that list. However, whether this can be compiled depends upon the distribution that is being sampled. Needs["CompiledFunctionTools`"] cf1 = Compile[{{m, _Real}, {s, _Real}}, Module[{v1, v2, v3, v4, v5, v6}, v1 = RandomVariate[NormalDistribution[m, s]]; v2 = RandomVariate[UniformDistribution[{m, s}]]; v3 = RandomVariate[GammaDistribution[m, s]]; v4 = RandomVariate[PoissonDistribution[m]]; v5 = RandomVariate[ChiSquareDistribution[m]]; v6 = RandomVariate[ExponentialDistribution[m]]; {v1, v2, v3, v4, v5, v6} ] ] Using CompilePrint shows that RandomVariate can be compiled for the Normal Distribution or the Uniform Distribution and not with some others. CompilePrint[cf1] 2 arguments 4 Integer registers 8 Real registers 1 Tensor register Underflow checking off Overflow checking off Integer overflow checking on RuntimeAttributes -> {} R0 = A1 R1 = A2 Result = T(R1)0 1 R2 = RandomNormal[ R0, R1]] 2 R3 = RandomReal[ R0, R1]] 3 I0 = MainEvaluate[ Function[{m, s}, RandomVariate[GammaDistribution[m, s]]][ R0, R1]] 4 I1 = MainEvaluate[ Function[{m, s}, RandomVariate[PoissonDistribution[m]]][ R0, R1]] 5 I2 = MainEvaluate[ Function[{m, s}, RandomVariate[ChiSquareDistribution[m]]][ R0, R1]] 6 I3 = MainEvaluate[ Function[{m, s}, RandomVariate[ExponentialDistribution[m]]][ R0, R1]] 7 R4 = I0 8 R5 = I1 9 R6 = I2 10 R7 = I3 11 T(R1)0 ={ R2, R3, R4, R5, R6, R7 } 12 Return Does anyone have a list of all the distributions that can be compiled (including, PDF, CDF and RandomVariate functionality)?
To my knowledge UniformDistribution and NormalDistribution are the only distributions that are directly compilable for RandomVariate . Consider that sampling from a UniformDistribution is what RandomReal was originally designed to do. This code is likely written deep down in C and so compiles without any special effort. In order to hook up RandomVariate for uniforms Compile just needs to recognize that this is really just a call to RandomReal . Now, sampling from a NormalDistribution is so common that it was considered worth the time investment to make it compilable. Notice that the call to RandomVariate actually produces a call to RandomNormal which was almost certainly written for this purpose. As for other distributions, special code would need to be written for each one in a similar fashion to RandomNormal for them to be "supported" by Compile . Since there are well over 100 of these, it would be a huge undertaking. An argument could be made for doing this for a few distributions but who is to decide which ones are most important? There is a sunny side. Most distributions have their own dedicated and highly optimized methods for random number generation. Often Compile is used under the hood when machine precision numbers are requested. Because of this, even if they were directly compilable you probably wouldn't see much of a speed boost since the code is already optimized. Fortunately Compile can happily handle arrays of numbers. I typically just rely on the optimized code used by RandomVariate to generate the numbers and subsequently pass them in as an argument to the compiled function. Incidentally, everything I just said about RandomVariate is also true of distribution functions like PDF , CDF , etc. Obviously these are just pure functions (in the univariate case) and unless they are built with some exotic components they should compile assuming you evaluate them before putting them into your compiled function.
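To illustrate the workflow described in the last paragraph, here is a minimal sketch: generate the variates with the (already optimized) top-level RandomVariate, then hand the packed array to the compiled function as an argument:

```mathematica
(* Illustrative example: the compiled function consumes pre-generated variates *)
cf = Compile[{{sample, _Real, 1}},
   Total[sample]/Length[sample]];
cf[RandomVariate[GammaDistribution[2., 3.], 10^6]]
(* close to 6., the mean of GammaDistribution[2, 3] *)
```

This keeps the random number generation in optimized top-level code while the downstream numerics stay inside Compile, avoiding MainEvaluate calls in the inner loop.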
{ "source": [ "https://mathematica.stackexchange.com/questions/1124", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/210/" ] }
1,128
I have a piecewise function that I would like to plot but I was wondering if it is possible that each part of the function that is plotted when its corresponding condition is true be plotted with a different color from the other parts. That is, if I have a Piecewise function Piecewise[{{val1, cond1},{val2,cond2},{val3,cond3}}] then I want val1 , val2 , and val3 to be plotted with different colors so that I can differentiate each case in the plot.
Here's an alternative approach to Spartacus' answer. What he did was split up the piecewise function into many different functions, each valid in only a small domain; what I am doing here is directly plotting the piecewise function as given, while the coloring is done using ColorFunction . I'll use the same function as Spartacus, f = Piecewise[{{#^2, # <= 0}, {#, 0 < # <= 2}, {Log[#], 2 < #}}] & Step by step to the result Now let's create a ColorFunction that does the desired thing out of this. I'll do this using Part , i.e. double brackets [[ ]] , which is not limited to lists only. First, create a copy of f . colorFunction = f; Now we need to find out how many pieces there are in this function; for this we have to extract those into a list we can apply Length to. Step by step: colorFunction[[1]] Piecewise[{{#1^2, #1 <= 0}, {#1, Inequality[0, Less, #1, LessEqual, 2]}, {Log[#1], 2 < #1}}, 0] That's the full function body. By applying another [[1]] , we can get the first argument of Piecewise : colorFunction[[1, 1]] {{#1^2, #1 <= 0}, {#1, 0 < #1 <= 2}, {Log[#1], 2 < #1}} From this matrix-shaped list, we'd like to get the length, leaving us with piecewiseParts = Length@colorFunction[[1,1]] Alright! Now make some colors out of that. The default plot colors are stored in ColorData[1][x] , where x=1,2,3,4... is the usual blue/magenta/yellowish/green and so on. colors = ColorData[1][#] & /@ Range@piecewiseParts {RGBColor[0.2472, 0.24, 0.6], RGBColor[0.6, 0.24, 0.442893], RGBColor[0.6, 0.547014, 0.24]} Now we need to take these color directives and inject them into the original function (that is, the colorFunction copy I made in the beginning), so that it replaces squares and logarithms by reds and blues. This is some more Part acrobatics: colorFunction[[1, 1, All, 1]] = colors Done! colorFunction is now identical to the original function f , only that the actual functions have been replaced by colors.
It looks like this: Piecewise[{{RGBColor[...], # <= 0}, {RGBColor[...], 0 < # <= 2}, {RGBColor[...], 2 < #}}] & Now it's time to plot, see the completed code below. The completed code f = Piecewise[{{#^2, # <= 0}, {#, 0 < # <= 2}, {Log[#], 2 < #}}] &; colorFunction = f; piecewiseParts = Length@colorFunction[[1, 1]]; colors = ColorData[1][#] & /@ Range@piecewiseParts; colorFunction[[1, 1, All, 1]] = colors; Plot[ f[x], {x, -2, 4}, ColorFunction -> colorFunction, ColorFunctionScaling -> False ] (The option ColorFunctionScaling determines whether Mathematica scales the domain for the color function to $[0,1]$. Handy in some cases, not so much here, since our self-made colorFunction is constant in this domain.)
{ "source": [ "https://mathematica.stackexchange.com/questions/1128", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/358/" ] }
1,135
I know that I can close a notebook and open it again in order to remove the In/Out labels. What I would like to know is if there is another easy way of removing these labels without having to close the notebook. Actually I just want to hide them from my printouts. It seems to me like it should be a printing option, but I can't find it.
You can set this in a style sheet so that it is done once and you don't have to do it again: Cell[StyleData[All, "Printout"], ShowCellLabel -> False] or can you programmatically add this private style to your notebook: SetOptions[EvaluationNotebook[], StyleDefinitions -> Notebook[{Cell[StyleData[StyleDefinitions -> "Default.nb"]], Cell[StyleData[All, "Printout"], ShowCellLabel -> False]}, StyleDefinitions -> "PrivateStylesheetFormatting.nb"] ] If you are unfamiliar with editing style sheets that latter option is probably the best.
{ "source": [ "https://mathematica.stackexchange.com/questions/1135", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/251/" ] }
1,186
A quick question, but which I don't believe has been asked here or at SO. Does Mathematica have a simple way to just download a file from the web? i.e. if I have a list of PDF links (~ 2,000), can I use Mathematica to quickly take them and save them to my system. The obvious way that I've used in the past is to Import[] the data, but since we're talking large-ish PDFs I wonder if there's a way to skip that step. I have used wget for this sort of thing in the past, but just seeing if there's an easy way to do it within Mathematica . The 'Web Operations' section of the documentation does not seem to have any obvious reference to this. If not, I will obviously just use the 'proper' tool.
How about a version of: Needs["Utilities`URLTools`"]; path = FetchURL[ "http://www-roc.inria.fr/gamma/download/counter.php?dir=MECHANICAL//&\ get_obj=ifp2_cut.mesh.gz&acces=ifp2_cut", "ifp2_cut.mesh.gz"];
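For a whole list of PDF links, the same function can simply be mapped over the URLs. A minimal sketch — here pdfLinks and the rule of naming each file after the last segment of its URL are assumptions for illustration:

```mathematica
Needs["Utilities`URLTools`"];

(* pdfLinks is assumed to be your list of ~2000 URL strings *)
(* save each file into the current directory under the last part of its URL *)
FetchURL[#, FileNameJoin[{Directory[], Last@StringSplit[#, "/"]}]] & /@ pdfLinks
```

Since FetchURL writes straight to disk, the PDFs are never parsed the way Import[] would parse them, which is exactly what you want for plain downloads.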
{ "source": [ "https://mathematica.stackexchange.com/questions/1186", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19/" ] }
1,199
I have a function that operates on a list of parameters of variable length $n$. I would like to create a Manipulate[] that has $n$ sliders, one for each list element, each considered a separate parameter. The syntax Manipulate[expr,{u,...},{v,...},...] does not lend itself to this, as it is geared toward a fixed number of parameters known in advance, and referenced by separate variable names. I have started exploring preparing a list of arguments to Manipulate[] and then using Apply[] , but this seems tricky and complicated. Has anyone come upon this conundrum before?
The Advanced Dynamic Functionality in Mathematica documentation has the following example that looks like what you need. DynamicModule[{n = 5, data = Table[RandomReal[], {20}]}, Column[{ Slider[Dynamic[n], {1, 20, 1}], Dynamic[Grid[Table[With[{i = i}, {Slider[Dynamic[data[[i]]]], Dynamic[data[[i]]]}], {i, n}] ]]}]] It builds a list of controllers ( Slider -s in this particular case) by using the fact that you can assign values to not just symbols but also to members of a list by doing data[[1]] = value . Which is exactly the thing that happens inside Dynamic[data[[i]]] , as it is equivalent to: Dynamic[data[[i]], (data[[i]] = #)&] telling Mathematica to whenever the actual value of data[[i]] is changed, use the new value ( # ) to update the expression data[[i]] . Also from the Documentation Center, the last example in Manipulate: Neat Examples may be useful: Manipulate[ ArrayPlot[ Take[data, h, w]], {{data, RandomInteger[{0, 1}, {10, 20}]}, ControlType -> None}, {{h, 5}, 1, 10, 1}, {{w, 5}, 1, 20, 1}, Dynamic[Panel[ Grid[Outer[Checkbox[Dynamic[data[[#1, #2]]], {0, 1}] &, Range[h], Range[w]]]]]] which gives
{ "source": [ "https://mathematica.stackexchange.com/questions/1199", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/194/" ] }
1,224
I am trying to generate & Export[] images with the code below, but as shown in the images I get some white border around my rectangle. How could I export an image that stops right at the edges of my Rectangle[] ? c0 = {RGBColor[23/85, 29/255, 142/255], RGBColor[244/255, 1, 59/255], RGBColor[1, 0, 32/85], RGBColor[18/85, 72/85, 197/255]} Export[FileNameJoin[{Directory[], "DropBox", ToString[#] <> ".jpg"}], Graphics[{EdgeForm[Thick], White, Rectangle[{0, 0}, {160, 90}], Flatten@({Flatten@(Table[ RandomChoice[{GrayLevel[.15], c0[[#]]}], {3}] & /@ Range[2, 4, 1]), MapThread[ Function[{Xs, Ys}, Rectangle[{Xs, Ys}, {Xs + 16, Ys + 9}]], {Flatten@Table[Range[0, 32, 16], {3}], Flatten@(Table[#, {3}] & /@ Range[63, 81, 9])}]}\[Transpose]), Black, Thick, Line[{{0, 63}, {160, 63}}]}, ImageSize -> 300]] & /@ Range[100]
There's another, undocumented, approach, although I can't take credit for discovering this one. The solution you're probably looking for (in the sense that Brett Champion's solution seems to clip off a little too much at the edges) is the Method option for Graphics : Method -> {"ShrinkWrap" -> True} e.g. (graphics example from the documentation for Circle ): Graphics[ Table[{Hue[t/20], Circle[{Cos[2 Pi t/20], Sin[2 Pi t/20]}, 1]}, {t, 20}], Method -> {"ShrinkWrap" -> True} ] Note that this has to be written as Method -> {"ShrinkWrap" -> True} . The form Method -> "ShrinkWrap" -> True might be expected to work, but it doesn't.
{ "source": [ "https://mathematica.stackexchange.com/questions/1224", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/172/" ] }
1,234
I am using the following code to easily generate a row of images of all eight planets of our Solar System: Labeled[# // Text, AstronomicalData[#, "Image"]] & /@ AstronomicalData["Planet"] // Row I like the above code because it is concise and gives nice output. However, if I wanted to do something similar for the moons in our Solar System, I run into the problem that only some moons have images. If I run the following line of code, I get a somewhat sloppy output with Missing Image errors embedded. Labeled[# // Text, AstronomicalData[#, "Image"]] & /@ AstronomicalData["PlanetaryMoon"] // Row I was wondering if there is a simple way to Map over a list of data conditionally? That is, if an element in a list does not meet a specific condition then skip over it. Note that I'm looking to avoid an explicit loop with an If construct. I really don't mind using a loop with an If statement, but I was curious if there was a concise idiom that anyone could shed some light on.
Here's an option which only passes moons with images to the Labeled function: Labeled[# // Text, AstronomicalData[#, "Image"]] & /@ Select[AstronomicalData["PlanetaryMoon"], ImageQ[AstronomicalData[#, "Image"]] &] // Row
{ "source": [ "https://mathematica.stackexchange.com/questions/1234", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/381/" ] }
1,259
I have a complicated function that I need multiple times, so I want to memoize it and have the first evaluation done in parallel. Unlike in my example below, it's not a continuous function, so interpolation is not an option. (In fact, its values are also functions.) The naive approach clearly does not work, because the memoized value is then only known on the kernel it was evaluated on: LaunchKernels[2]; f[x_] := f[x] = (Pause[3]; N[Sin[x]]); (*Expensive calculation*) ParallelDo[f[x], {x, 3}]; ParallelEvaluate[AbsoluteTiming[f[1]]] (* ==> {{3.000632, 0.841471}, {0.000024, 0.841471}} *) I believe I found a workaround by doing something like this: f[x_] := (Pause[3]; N[Sin[x]]); (*Expensive calculation - NO memoization*) t = ParallelTable[f[x], {x, 3}]; Do[f[x] = t[[x]], {x,3}]; Using SetSharedFunction[f] before ParallelDo also yields a non-optimal result: {{0.012051, 0.841471}, {0.012202, 0.841471}} 0.01 s is still a long time to look up a value (see above, it should be < 1 ms). Is there something more elegant, or do I have to keep it like this? Edit: Just to be clear, the workaround above without Shared Functions works, it runs in parallel and the main kernel knows the values afterwards, but it strikes me as an ugly hack. I was wondering if there was an "official" solution.
The problem with SetSharedFunction is that it forces f to be evaluated on the main kernel: this means that if you simply do SetSharedFunction[f] then you will lose parallelization (a timing of ParallelTable[f[x], {x, 3}] will give about 9 seconds). This property of SetSharedFunction is not clear from the documentation in my opinion. I learned about it from this answer . It is also not clear if the behaviour is the same in version 7 (can someone test? I tested my answer on version 8 only). We can however store the memoized values on the main kernel, while evaluating the expensive computations on the parallel kernels. Here's one way to do it: f[x_] := With[{result = g[x]}, If[result === Null, g[x] = (Pause[3]; N[Sin[x]]), result ] ] SetSharedFunction[g] Here I used the special property of shared functions that they return Null on the parallel kernels when they have no value for a given argument. The first time we run this, we get a 6 s timing, as expected: AbsoluteTiming@ParallelTable[f[x], {x, 3}] (* ==> {6.0533462, {0.841471, 0.909297, 0.14112}} *) The second time it will be very fast: AbsoluteTiming@ParallelTable[f[x], {x, 3}] (* ==> {0.0260015, {0.841471, 0.909297, 0.14112}} *) However, as you noticed, evaluating f on the parallel kernels is a bit slow. On the main kernel it's much faster. This is due to the communication overhead: every time f is evaluated or changed on a subkernel, it needs to communicate with the main kernel.
The slowdown does not really matter if f is really expensive (like the 3 seconds in your toy example), but it can be significant if f is very fast to execute (comparable in time to the apparently ~10 ms communication overhead): ParallelTable[AbsoluteTiming@f[x], {x, 3}] (* ==> {{0.0100006, 0.841471}, {0.0110006, 0.909297}, {0.0110007, 0.14112}} *) Table[AbsoluteTiming@f[x], {x, 3}] (* ==> {{0., 0.841471}, {0., 0.909297}, {0., 0.14112}} *) Finally, a note about benchmarking: in general, measuring very short times like the 10 ms here should be done with care. On older versions of Windows, the timer resolution is only 15 ms. On Windows 7, the resolution is much better. These timings are from Windows 7. Update: Based on @Leonid's suggestion in the comments, and @Volker's original solution, we can combine subkernel and main kernel caching like this: f[x_] := With[{result = g[x]}, (f[x] = result) /; result =!= Null]; f[x_] := g[x] = f[x] = (Pause[3]; N[Sin[x]]) Packaged up solution We can bundle all these ideas into a single memoization function (see the code at the end of the post). Here is an example use: Clear[f] f[x_] := (Pause[3]; Sin[x]) AbsoluteTiming@ParallelTable[AbsoluteTiming@pm[f][Mod[x, 3]], {x, 15}] (* ==> {6.0683471, {{3.0181726, Sin[1]}, {0.0110007, Sin[2]}, {3.0181726, 0}, {0., Sin[1]}, {3.0191727, Sin[2]}, {3.0181726, 0}, {0.0110007, Sin[1]}, {0., Sin[2]}, {0., 0}, {0., Sin[1]}, {0., Sin[2]}, {0., 0}, {0., Sin[1]}, {0., Sin[2]}, {0., 0}}} *) The function simply needs to be called using pm[f][x] instead of f[x] . Memoized values are associated with f as UpValues , so I thought automatic distribution of definitions should take care of both synchronizing memoized values and clearing them when necessary. Unfortunately this mechanism doesn't seem to be reliable (sometimes it works, sometimes it doesn't), so I provided a function clearParallelCache[f] that will clear all memoized values on all kernels.
Caching happens at two levels: on the main kernel level and on subkernels. Computed or main-kernel-cached values are copied to the subkernels as soon as possible. This is visible in the timings of the example above. Sometimes retrieving cached values takes 10 ms, but eventually it becomes very fast. Note that it might happen that the two kernels will each compute the same value (if they start computing it at the same time). This can sometimes be avoided by using a different setting for the Method option of Parallelize (depending on the structure of the input data). For simplicity, I restricted pm to only work with functions that take a single numerical argument (and return anything). This is to avoid having to deal with more complex conditional definitions (especially cases when the function won't evaluate for certain argument types). It could safely be changed to accept e.g. a vector or matrix of values instead. The code pm[f_Symbol][x_?NumericQ] := With[{result = memo[f, x]}, If[result === Null, With[{value = valueOnMain[f, x]}, If[value === Null, f /: memo[f, x] = setValueOnMain[f, x, f[x]], f /: memo[f, x] = value] ], result] ] memo[___] := Null DistributeDefinitions[memo]; valueOnMain[f_, x_] := memo[f, x] setValueOnMain[f_, x_, fx_] := f /: memo[f, x] = fx SetSharedFunction[valueOnMain, setValueOnMain] clearParallelCache[f_Symbol] := (UpValues[f] = {}; ParallelEvaluate[UpValues[f] = {}];)
{ "source": [ "https://mathematica.stackexchange.com/questions/1259", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/386/" ] }
1,276
Is there a pattern to match a list of lists, containing at least one list, with each list containing at least one element? In the Mathematica documentation it says that the pattern x:{___List} will match a list of lists. This is true, but it also matches {} , which is a list of zero lists (or zero anything ). To make sure there is at least one list in my list of lists I can remove one of the underscores to change BlankNullSequence to BlankSequence , i.e. switch to x:{__List} which no longer matches {} . It does, however, match {{}} . How can I also guarantee that each matched sublist is non-empty? Is this even possible, or should I be checking for it programmatically within the Module to which I'm passing arguments of this form?
If I am understanding you: x : {{__} ..} See Repeated for more information and additional options. Also see RepeatedNull while you're there. Make sure you understand BlankSequence and Pattern as well. Here is a breakdown of the expression above. First let us view the FullForm which is as close to the way Mathematica sees it as possible: FullForm[ x:{{__}..} ] Pattern[x, List[ Repeated[ List[ BlankSequence[] ] ] ] ] This expanded form is useful to remove any ambiguity in Mathematica's parsing. Therefore from the inside out we have ( short form : long form : description): __ : BlankSequence[] : one or more arguments with any head { } : List[ ] : inside the head List .. : Repeated[ ] : one or more arguments matching the given pattern { } : List[ ] : inside the head List x: : Pattern[x, ] : a unique expression that matches the given pattern, named x Pay attention to this last point: naming the pattern changes the way it behaves, such that it represents a unique expression. Consider this superficially similar pattern: x : {{a__} ..} This will only match e.g. {{1, 2}, {1, 2}, {1, 2}} but not {{1, 2}, {3}, {4, 5, 6}} because by naming the first sequence 1, 2 all other sequences must be identical. Simply matching the pattern a__ independently is not enough.
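To confirm the behavior, the pattern can be checked directly against the edge cases from the question:

```mathematica
(* {} has no sublist, {{}} has an empty sublist; only the last case qualifies *)
MatchQ[#, {{__} ..}] & /@ {{}, {{}}, {{1, 2}, {3}, {4, 5, 6}}}
(* {False, False, True} *)
```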
{ "source": [ "https://mathematica.stackexchange.com/questions/1276", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/151/" ] }
1,290
I'm new to functional programming in Mathematica and am trying to remove one list of assorted elements from another. However, I only find functions working with indices rather than with the values themselves: list1={b,a,e,f,c,d} list2={f,e,c} I now want to remove list2 from list1 , giving: result={b,a,d} I already found out that you can "abuse" DeleteCases[list1, a] to remove 1 specific element from a list, but not a whole assorted list... I would be very grateful for a simple solution to do it. Thanks a lot for any answer!
Use DeleteCases[list1, Alternatives @@ list2] In new versions (M8.0+), DeleteCases is optimized on patterns not involving blanks, so this will be fast also for large lists. For earlier versions, this will work: Replace[list1, Dispatch[Thread[list2 -> Sequence[]]],{1}] being 2-3 times slower, but still very fast.
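Applied to the lists from the question:

```mathematica
list1 = {b, a, e, f, c, d};
list2 = {f, e, c};

(* Alternatives @@ list2 turns the list into the pattern f | e | c *)
DeleteCases[list1, Alternatives @@ list2]
(* {b, a, d} *)
```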
{ "source": [ "https://mathematica.stackexchange.com/questions/1290", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/354/" ] }
1,300
I would like to generate a ListPlot with the color for each point in the plot corresponding to a particular value (not associated with the position in the plot). I'd then like to add a legend indicating what the color means. I'm currently solving the first part of the problem by essentially generating a separate ListPlot for each data point and then assigning a color to that ListPlot. Here's a toy example: n = 5000; pos = RandomVariate[NormalDistribution[0, 2], {n, 2}]; altitude = Norm /@ pos; ListPlot[{#} & /@ pos, PlotStyle -> ((Blend[{{Min[altitude], Yellow}, {Max[altitude], Red}}, #] &) /@ altitude), AspectRatio -> 1] So my questions are: (1) Is this the only way to generate such a ListPlot (it does the job but it seems inelegant and I suspect it's inefficient, though that's not a big concern for my application)? (2) Is there an easy way to generate a legend which indicates the value of the color (i.e., a gradient bar which shows the color scale)?
In this case I would use Point for plotting the points. For example n = 5000; pos = RandomVariate[NormalDistribution[0, 2], {n, 2}]; altitude = Norm /@ pos; colorf = Blend[{{Min[altitude], Yellow}, {Max[altitude], Red}}, #] & pl = Graphics[MapThread[{colorf[#1], Point[#2]} &, {altitude, pos}], Axes -> True, AspectRatio -> 1] As for plotting legends, that's a recurring issue in Mathematica. There is a package called PlotLegends` which you could try, but it is not very user friendly and the legends it produces are quite ugly IMHO. I find that it's often faster to just create a legend by hand. For example, this is a function I use for creating legends with contour plots: plotLegend[{min_, max_}, n_, col_] := Graphics[MapIndexed[{{col[#1], Rectangle[{0, #2[[1]] - 1}, {1, #2[[1]]}]}, {Black, Text[NumberForm[N@#1, {4, 2}], {4, #2[[1]] - .5}, {1, 0}]}} &, Rescale[Range[n], {1, n}, {min, max}]], Frame -> True, FrameTicks -> None, PlotRangePadding -> .5] Here, n is the number of subdivisions and col is the colour function. You could combine the legend with the original plot using Grid , e.g. leg = plotLegend[Through[{Min, Max}[altitude]], 20, colorf]; Grid[{{Show[pl, ImageSize -> {Automatic, 300}], Show[leg, ImageSize -> {Automatic, 250}]}}]
{ "source": [ "https://mathematica.stackexchange.com/questions/1300", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/27/" ] }
1,301
I was just evaluating a couple of expressions and started to get errors like this: General::ivar: -1.49994 is not a valid variable. >> General::ivar: -1.43871 is not a valid variable. >> General::ivar: -1.37749 is not a valid variable. >> General::stop: Further output of General::ivar will be suppressed during this calculation. >> I'm doing nothing complicated - currently, I simply did this: f[x_]:=x^2 + 2x + 1 Plot[f[x], {x, -4, 4}] Solve[f[x] == 4] g[x_]:=D[f[x], x] Plot[g[x], {x, -2, 2}] // ^ errors caused by this Actually, this isn't the exact quadratic I am investigating, but it is a quadratic and I expected this to work. I googled, as you'd expect, and found this Stack Overflow question which suggested: Plot[Evaluate[g[x]], {x, -2, 2}] As a workaround. It works - my question is, why doesn't that set of instructions generate that error (I can see it is something to do with replacing, but why is one plot different from the other?) and how can I avoid it? Is there something I should specifically have done in forming g ?
The problem lies in g[x_] := D[f[x], x] ; remember that what SetDelayed (that is, := ) does is to replace stuff on the right corresponding to patterns on the left before evaluating. Thus, when one does something like g[2] (and something like this happens within Plot[] ), you are in fact evaluating D[f[2], 2] , and since one cannot differentiate with respect to a constant ;), you get the General::ivar error message. If you use Set instead (that is, g[x_] = D[f[x], x] ), f[x] is differentiated first before the result of D[] is assigned to g[x_] . Since what's on the right of g[x_] is now an actual function, Plot[] no longer has a reason to complain.
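A minimal sketch of the difference (g1 and g2 are illustrative names):

```mathematica
f[x_] := x^2 + 2 x + 1;

g1[x_] := D[f[x], x]; (* SetDelayed: g1[2] evaluates D[f[2], 2] and triggers General::ivar *)
g2[x_] = D[f[x], x];  (* Set: the derivative 2 + 2 x is computed once and stored *)

g2[2]
(* 6 *)
```

With g2 defined via Set, Plot[g2[x], {x, -2, 2}] works directly, since g2 holds the already-differentiated expression rather than a recipe that re-differentiates on every call.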
{ "source": [ "https://mathematica.stackexchange.com/questions/1301", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/-1/" ] }
1,310
This is certainly a newbie question, but I could not find the answer by searching. I am using Mathematica 8 under Windows 7. I want to use a default magnification of 150% for all notebooks and help files which I open in Mathematica. At present I have to do it through Windows->Magnification for every single document I open, which is of course tedious. I assume there is a way of setting some options which is saved across different sessions as well. Would appreciate any help.
If the reason you ask is because the fonts are much too small, then there is another approach that is arguably more correct than changing Magnification, and that is to specify a better screen resolution. By default it is 72ppi, but screens haven't been like that for years (mine is about 100ppi). SetOptions[$FrontEndSession, FontProperties -> {"ScreenResolution" -> 96} ] If you like the setting and what to make it persist between sessions, replace $FrontEndSession with $FrontEnd .
{ "source": [ "https://mathematica.stackexchange.com/questions/1310", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/-1/" ] }
1,314
I am an undergraduate student, starting to use Mathematica to model simple physical chemistry problems, as a prelude to a summer internship in a computational/theoretical chemistry group. The solution to a problem I was given (and walked through) now needs to be incorporated in the group's program, which contains >17 Kloc. Before seeing this big program, my experience with Mathematica gave me the impression that it was a very sophisticated graphical calculator. Mathematica is fast, extremely versatile, but its code cannot be compiled and made executable (as far as I know). I know one can use the CDF player, but that means the program is not really standalone. Considering my example, I am wondering if there is a point at which one would be better off using another programming language than Mathematica to do the computation. Are there disadvantages or obstacles present when Mathematica programs grow bigger than a certain size?
I think that Mathematica is a great prototyping environment, and has a bright future as a system for both prototyping and implementation of complete components of other systems, from back-ends to front-ends. In my opinion, we are now witnessing the process of it being transformed from a pure scientific tool into a general software engineering tool / language. So, I think that moving to another language can often be done pretty late in the development cycle (disclaimer: I have not personally built large systems involving Mathematica as a part - although I worked on large systems written in Java before - so what I write here is mostly an educated guess based on my separate experiences in Mathematica and other languages). The great benefit of Mathematica is that it is a very high-level development environment which can serve as a gluing medium for development of hybrid systems, where different parts are written in different languages. For example, I found it a great testing / development medium for Java applications. This is generally not yet quite apparent since we still lack some tools to boost productivity and overcome cross-language barriers. But I am more than positive that such tools are going to emerge pretty soon. When you develop a system, what matters is how flexible your architecture is, how testable your modules are, and how fast the development iterations are. A high-level environment like Mathematica is a great win for all of these. That said, I would not currently use Mathematica as the central run-time of the application, simply because the kernel crashes every now and then. I would make that another runtime (e.g. Java), which calls Mathematica and handles possible errors, exceptions, crashes and the like (actually, WebMathematica is just that - Mathematica managed by the Java runtime and bundled as a web application for some Java container like Apache Tomcat).
Mathematica can however serve as both an excellent back-end and an excellent prototyping environment, so once again, my feeling is that one can benefit a lot from developing even large industrial systems in or with Mathematica. There are actually companies which do just that, and are quite successful. As to when to use C etc. - my advice is: as late as you can. Many problems for which Mathematica is perceived as slow can be solved quite efficiently with the knowledge of how to write efficient Mathematica code. Maybe even more importantly, it is rare that you know the exact method you will use for a given problem, all in advance. Once you switch to C, you will have to deal with lots of low-level details, which will increase development time and the chances for errors, plus they will distract you from the essence of the problem you are solving. Even if you switch to C at the end, Mathematica can save you a lot of time in prototyping your solution, and minimize the amount of low-level work you have to do. Scaling to large code bases: This is a problem in pretty much every language. There are probably many factors which determine how well a given language scales. Part of this is also probably not just about the language itself, but about existing development tools. For example, Java scales reasonably well, but no one in their right mind would use it for large projects without smart IDE-s. So, I'd set out a few important factors (the list is incomplete, of course): Type system . Strongly typed languages can use the compiler to help find errors, and this will be particularly powerful for those with type inference (ML family languages for example). Means for composition . These include classes / interfaces / inheritance for OO, and higher-order functions / closures / possibly macros for FP. I am biased towards FP here. Means for information hiding, and separation between interface and implementation . This is extremely important, and this is where OO shines, IMO.
You can get it in FP, but have to be more disciplined. Package / module system, and namespaces - this is a very important tool for large-scale encapsulation / information-hiding. Development tools (IDE-s, debuggers, profilers) - can make a huge difference. Standards of coding and code exchange . When they exist, it makes for much easier code reuse, assuming that you don't write everything yourself. There are probably other important factors I missed. The question is how Mathematica fares regarding these factors. I'd say that potentially , Mathematica can fare quite well. I think right now it suffers the most from a lack of certain development tools (a really good / useful debugger, for one) and coding / code exchange standards. Also, the programming practices which allow one to scale to larger systems, while certainly possible in Mathematica, are not developed / not in widespread use yet. For example, closures and higher-order functions are very useful for that, but it's not something every second Mathematica programmer is using. Also, while Mathematica allows one to write macros (functions which manipulate code), its rather complex evaluation control mechanisms make them hard to write. And macros are an extremely powerful scaling tool - in LISP they allow for easy creation of DSL-s because essentially they extend the compiler in the direction you want. Another problematic thing is that Mathematica is often too general, and this generality gets in the way in the form of evaluation and performance surprises. Some intermediate language layer would be a big help here. To summarize, my opinion is this: Mathematica can be used for large projects, even at present (it actually is used for at least two huge ones: it is written largely in itself, and WolframAlpha is another example.
From my personal experience, a few of my projects were several thousand lines long), but your code won't scale automatically for you, and you need to be a pretty good Mathematica programmer to be able to manage the complexity of large projects. In this regard, many modern languages provide more automatic tools for scaling the code base, and more techniques are well-known and in widespread use. I also think that the situation with Mathematica will improve in the future: we will have better development tools, more programming practices will be shared, etc. So, yes, you definitely can use it for large projects, but right now it won't be as easy as, say, in Java, Python, or some other well-known languages. Much of it is not at all inherent in Mathematica per se, but reflects its young age as a general-purpose programming language used for larger projects outside academia. My two cents.
{ "source": [ "https://mathematica.stackexchange.com/questions/1314", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/73/" ] }