Getting keys with duplicate values in a Groovy map I just came across this problem while exploring Groovy: I have a Map and I want to get the keys that share the same value. ``` Map maps = ['a': 10, 'b': 10, 'c': 11, 'd': 12] ``` Is it possible to get only the keys with the same value? What I want to get from this example is a list with the values: ``` List keys = ['a','b'] ``` I have a solution for this problem, but it's a long stretch of code; I'm just wondering if it is possible to solve this using `findAll` on a Map. I'm having a hard time counting the values in the map. Thanks for sharing your ideas.
If you *know* which value you need the keys for, then you can use the `findAll` method to get all the entries that have that value and then get the keys with `keySet` or by using the splat operator as `*.key`: ``` def keysForValue(map, value) { map.findAll { it.value == value }*.key } def map = ['a': 10, 'b': 10, 'c': 11, 'd': 12] assert keysForValue(map, 10) == ['a', 'b'] assert keysForValue(map, 12) == ['d'] assert keysForValue(map, 13) == [] ``` --- In case you don't know which value should have the repeated keys, and all you want is to get the keys that have a repeated value (if there's any), you can try something like: ``` def getKeysWithRepeatedValue(map) { map.groupBy { it.value }.find { it.value.size() > 1 }?.value*.key } ``` It first groups the map entries by value, so the result of `map.groupBy { it.value }` for the example map is `[10:[a:10, b:10], 11:[c:11], 12:[d:12]]`. Then it finds the first entry in that map that has a list with more than one element as a value; that entry corresponds with the value that has more than one key associated with it. The result of `.find { it.value.size() > 1 }` would be the map entry `10={a=10, b=10}`. The last conditional navigation and splat operator `?.value*.key` is to get the value of that entry in case it exists and then get keys of that value. Usage: ``` assert getKeysWithRepeatedValue(['a': 10, 'b': 10, 'c': 11, 'd': 12]) == ['a', 'b'] // If no value has more than one key, returns null: assert getKeysWithRepeatedValue(['a': 10, 'c': 11, 'd': 12]) == null // If more than one value has repeated keys, returns the keys that appear first: assert getKeysWithRepeatedValue(['a': 10, 'b': 11, 'c': 10, 'd': 11]) == ['a', 'c'] ```
Flask end response and continue processing Is there a way in Flask to send the response to the client and then continue doing some processing? I have a few book-keeping tasks which are to be done, but I don't want to keep the client waiting. Note that these are actually really fast things I wish to do, thus creating a new thread, or using a queue, isn't really appropriate here. (One of these fast things is actually adding something to a job queue.)
Sadly teardown callbacks do not execute after the response has been returned to the client: ``` import flask import time app = flask.Flask("after_response") @app.teardown_request def teardown(request): time.sleep(2) print("teardown_request") @app.route("/") def home(): return "Success!\n" if __name__ == "__main__": app.run() ``` When curling this you'll note a 2s delay before the response displays, rather than the curl ending immediately and then a log 2s later. This is further confirmed by the logs: ``` teardown_request 127.0.0.1 - - [25/Jun/2018 15:41:51] "GET / HTTP/1.1" 200 - ``` The correct way to execute after a response is returned is to use WSGI middleware that adds a hook to the [close method of the response iterator](https://www.python.org/dev/peps/pep-0333/#specification-details). This is not quite as simple as the `teardown_request` decorator, but it's still pretty straight-forward: ``` import traceback from werkzeug.wsgi import ClosingIterator class AfterResponse: def __init__(self, app=None): self.callbacks = [] if app: self.init_app(app) def __call__(self, callback): self.callbacks.append(callback) return callback def init_app(self, app): # install extension app.after_response = self # install middleware app.wsgi_app = AfterResponseMiddleware(app.wsgi_app, self) def flush(self): for fn in self.callbacks: try: fn() except Exception: traceback.print_exc() class AfterResponseMiddleware: def __init__(self, application, after_response_ext): self.application = application self.after_response_ext = after_response_ext def __call__(self, environ, start_response): iterator = self.application(environ, start_response) try: return ClosingIterator(iterator, [self.after_response_ext.flush]) except Exception: traceback.print_exc() return iterator ``` Which you can then use like this: ``` @app.after_response def after(): time.sleep(2) print("after_response") ``` From the shell you will see the response return immediately and then 2 seconds later the `after_response` will hit the logs: ``` 127.0.0.1 - - [25/Jun/2018 15:41:51] "GET / HTTP/1.1" 200 - after_response ``` This is a summary of a previous answer provided [here](https://stackoverflow.com/questions/48994440/execute-a-function-after-flask-returns-response/51013358#51013358).
Missing members from default constructor arguments The following class has an auxiliary constructor to change one property in an immutable way. ``` class AccUnit(size: Long, start: Date, direction:Direction, protocol:String) { def this(size:Long, that:AccUnit) {this(size, that.start, that.direction, that.protocol)} } ``` The compiler returns these errors: ``` AccUnit.scala:26: error: value start is not a member of trafacct.AccUnit def this(size:Long, that:AccUnit) {this(size, that.start, that.direction, that.protocol)} ^ AccUnit.scala:26: error: value direction is not a member of trafacct.AccUnit def this(size:Long, that:AccUnit) {this(size, that.start, that.direction, that.protocol)} ^ AccUnit.scala:26: error: value protocol is not a member of trafacct.AccUnit def this(size:Long, that:AccUnit) {this(size, that.start, that.direction, that.protocol)} ``` Why does it think there are no such members?
Because it should be ``` class AccUnit(val size: Long, val start: Date, val direction:Direction, val protocol:String) {...} ``` or ``` case class AccUnit(size: Long, start: Date, direction:Direction, protocol:String) {...} ``` In your version, `size` and others are only constructor arguments, but not members. UPDATE: You can check it yourself: ``` // Main.scala class AccUnit(size: Long, protocol: String) F:\MyProgramming\raw>scalac Main.scala F:\MyProgramming\raw>javap -private AccUnit Compiled from "Main.scala" public class AccUnit extends java.lang.Object implements scala.ScalaObject{ public AccUnit(long, java.lang.String); } ```
Python: How do I sort array X but carry the same relative sort over to Y? For example ``` X=[5,6,2,3,1] Y=[7,2,3,4,6] ``` I sort X: ``` X=[1,2,3,5,6] ``` But I want the same relative sort applied to Y so the numbers stay in the same positions relative to each other as before: ``` Y=[6,3,4,7,2] ``` I hope this makes sense!
Usually, you do a `zip`-`sort`-`unzip` for this ``` >>> X = [5,6,2,3,1] >>> Y = [7,2,3,4,6] ``` Now sort them together: ``` >>> sorted(zip(X,Y)) [(1, 6), (2, 3), (3, 4), (5, 7), (6, 2)] ``` Pair that with a "unzip" (`zip(*...)`) ``` >>> zip(*sorted(zip(X,Y))) [(1, 2, 3, 5, 6), (6, 3, 4, 7, 2)] ``` which you could unpack: ``` >>> X,Y = zip(*sorted(zip(X,Y))) >>> X (1, 2, 3, 5, 6) >>> Y (6, 3, 4, 7, 2) ``` Now you have `tuple` instead of `list` objects, but if you really need to, you can convert it back. --- As pointed out in the comments, this does introduce a very slight dependence on the second list in the sort: Consider the lists: ``` X = [1,1,5,7] #sorted already Y = [2,1,4,6] #Not already sorted. ``` With my "recipe" above, at the end of the day, you'll get: ``` X = (1,1,5,7) Y = (1,2,4,6) ``` which might be unexpected. To fix that, you could pass a `key` argument to `sorted`: ``` from operator import itemgetter X,Y = zip(*sorted(zip(X,Y),key=itemgetter(0))) ``` Demo: ``` >>> X [1, 1, 5, 7] >>> Y [2, 1, 4, 6] >>> XX,YY = zip(*sorted(zip(X,Y))) >>> XX (1, 1, 5, 7) >>> YY (1, 2, 4, 6) >>> from operator import itemgetter >>> XX,YY = zip(*sorted(zip(X,Y),key=itemgetter(0))) >>> XX (1, 1, 5, 7) >>> YY (2, 1, 4, 6) ```
Is certain matlab-routine used in matlab script? I am running a big m-file that I didn't write myself and that depends on certain subfunctions. I want to know if anywhere in all nested functions a particular function (in my case the function eig.m (to calculate eigenvalues) ) is used. Is there a quick way to do this? kind regards, Koen
You can use the **semi-documented** function `getcallinfo` (see [Yair Altman's blog](http://undocumentedmatlab.com/blog/function-definition-meta-info) for more information about it): > > `getcallinfo` > > > Returns **called functions** and their first and last lines > > This function is unsupported and might change or be removed without > notice in a future version. > > > ## General use of `getcallinfo` Let's create an **example script** which contains **subfunctions** (this works [in Matlab R2016b](https://es.mathworks.com/help/matlab/matlab_prog/create-functions-in-files.html) or newer) and save it as `'filename.m`'. The procedure also works if there are **nested functions**, or if the main file is a **function** instead of a script. ``` x = input(''); y = find(x); z = f(norm(x)); disp(z) function u = f(v) u = -log2(v) + log2(pi); end ``` Then: ``` >> g = getcallinfo('filename.m'); ``` gives you a **nested struct array** with interesting information, including function calls. The first entry, `g(1)`, refers to the main file. There may be further entries for subfunctions or nested functions. In this case, `g(2)` refers to subfunction `f`. ``` >> g(1).calls.fcnCalls ans = struct with fields: names: {'input' 'find' 'norm' 'disp' 'log2' 'log2' 'pi'} lines: [1 2 3 4 6 6 6] >> g(1).calls.innerCalls ans = struct with fields: names: {'f'} lines: 3 >> g(2).calls.fcnCalls ans = struct with fields: names: {'log2' 'log2' 'pi'} lines: [6 6 6] >> g(2).calls.innerCalls ans = struct with fields: names: {1×0 cell} lines: [1×0 double] ``` Other fields of `g` give **further details**, such as name ``` >> g(1).name ans = filename >> g(2).name ans = f ``` or type ``` >> g(1).type ans = Script with no properties. >> g(2).type ans = subfunction ``` ## How to determine if a given function is used anywhere in the file Obtain `g` as explained above, and then look for the desired function name in all `calls.fcnCalls.names` fields of `g`: ``` g = getcallinfo('filename.m'); sought_function = 'log2'; % or 'eig' in your case t = arrayfun(@(x) x.calls.fcnCalls.names, g, 'UniformOutput', false); % collect all names of called functions. Gives a cell array of cell arrays % of strings (character vectors) t = [t{:}]; % de-nest: concatenate into cell array of strings result = any(strcmp(t, sought_function)); % compare with sought function name ```
What is "( set -o posix ; set ) | less " doing? On my search for a command to list all shell variables, I somehow realized, that there is a command to list all environment variables, but somehow there is no one to list all shell variables, for reasons unknown to me. However, someone here gave an answer on how to display all variables, shell and environment ones. ``` ( set -o posix ; set ) | less ``` He actually did not explain for the layman what this expression does, and my fragmentary understanding is not enough to grasp the idea behind it. This is what I know: - **( command1; command2)** this causes the commands to be executed inside a child process of the shell. - **set** is some way to declare variables, though do not know what the **-o posix** means and why a second **set** is executed in succession - **command | less** This one is not the problem, even I understand it, it is a pager for more control about the output.
`set` shows all shell variables (exported or not). In Bash, `set -o posix` sets the shell in [POSIX compatibility mode](https://www.gnu.org/software/bash/manual/html_node/Bash-POSIX-Mode.html#Bash-POSIX-Mode). (I don't know if other shells have similar syntax for the similar feature, but I'll assume Bash here.) The difference in this case is that usually Bash's `set` shows also shell functions, but in POSIX mode `set` only shows variables, and changes the output format slightly: > > 44. When the set builtin is invoked without options, it does not display shell function names and definitions. > 45. When the set builtin is invoked without options, it displays variable values without quotes, unless they contain shell metacharacters, even if the result contains nonprinting characters. > > > In Bash, there's additionally the [`declare` builtin](https://www.gnu.org/software/bash/manual/html_node/Bash-Builtins.html#Bash-Builtins) that can be used to show all the otherwise hidden or Bash-specific flags of variables: `declare -p xx` shows variable `xx` in a format that Bash can take as input. `declare -p` shows all variables and `declare -f` can be used to show functions.
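A quick, illustrative way to see the function-hiding difference at a Bash prompt (the function name `myfunc` is just a throwaway example):

```bash
# Illustrative: define a throwaway function, then compare what `set` prints.
myfunc() { :; }

set | grep '^myfunc'                      # plain `set` lists the function too
( set -o posix ; set ) | grep '^myfunc'   # POSIX mode: only variables, so no match
```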
How do I send an array of objects from one Activity to another? I have a class like: ``` class persona implements Serializable { int age; String name; } ``` My first Activity fills an array: ``` persona[] p; ``` Then I need this info in another Activity. **How can I send it?** I tried: ``` Bundle b = new Bundle(); b.putSerializable("persona", p); ``` But I can't.
AFAIK there is no method that puts a serializable array into a Bundle. Anyway, here is a solution that uses Parcel instead. Change your class to this: ``` import android.os.Parcel; import android.os.Parcelable; public class persona implements Parcelable { int age; String name; public static final Parcelable.Creator<persona> CREATOR = new Creator<persona>() { @Override public persona[] newArray(int size) { // TODO Auto-generated method stub return new persona[size]; } @Override public persona createFromParcel(Parcel source) { // TODO Auto-generated method stub return new persona(source); } }; public persona(Parcel in) { super(); age = in.readInt(); name = in.readString(); } public persona() { super(); // TODO Auto-generated constructor stub } @Override public int describeContents() { // TODO Auto-generated method stub return 0; } @Override public void writeToParcel(Parcel dest, int flags) { dest.writeInt(age); dest.writeString(name); } } ``` Then you can send the array like this: ``` Bundle b = new Bundle(); b.putParcelableArray("persona", p); ``` By the way, using Parcelable instead of Serializable is more efficient on Android.
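The snippet above only covers the sending side. As a rough sketch of the receiving Activity (assuming the `Bundle` is attached to the `Intent` used to start it; the key `"persona"` comes from the code above), note that `getParcelableArray()` returns a `Parcelable[]`, which in general cannot be cast directly to `persona[]` and should be copied element by element:

```java
// Inside the receiving Activity, e.g. in onCreate() (illustrative sketch,
// not part of the original answer):
Bundle extras = getIntent().getExtras();
Parcelable[] raw = extras.getParcelableArray("persona");

// Copy element by element rather than casting the whole array:
// the runtime type of 'raw' is Parcelable[], not persona[].
persona[] people = new persona[raw.length];
for (int i = 0; i < raw.length; i++) {
    people[i] = (persona) raw[i];
}
```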
Full outer join of two or more data frames Given the following three Pandas data frames, I need to merge them similar to an SQL full outer join. Note that the key is multi-index `type_N` and `id_N` with `N` = 1,2,3: ``` import pandas as pd raw_data = { 'type_1': [0, 1, 1,1], 'id_1': ['3', '4', '5','5'], 'name_1': ['Alex', 'Amy', 'Allen', 'Jane']} df_a = pd.DataFrame(raw_data, columns = ['type_1', 'id_1', 'name_1' ]) raw_datab = { 'type_2': [1, 1, 1, 0], 'id_2': ['4', '5', '5', '7'], 'name_2': ['Bill', 'Brian', 'Joe', 'Bryce']} df_b = pd.DataFrame(raw_datab, columns = ['type_2', 'id_2', 'name_2']) raw_datac = { 'type_3': [1, 0], 'id_3': ['4', '7'], 'name_3': ['School', 'White']} df_c = pd.DataFrame(raw_datac, columns = ['type_3', 'id_3', 'name_3']) ``` The expected result should be: ``` type_1 id_1 name_1 type_2 id_2 name_2 type_3 id_3 name_3 0 3 Alex NaN NaN NaN NaN NaN NaN 1 4 Amy 1 4 Bill 1 4 School 1 5 Allen 1 5 Brian NaN NaN NaN 1 5 Allen 1 5 Joe NaN NaN NaN 1 5 Jane 1 5 Brian NaN NaN NaN 1 5 Jane 1 5 Joe NaN NaN NaN NaN NaN NaN 0 7 Bryce 0 7 White ``` How can this be achieved in Pandas?
I'll propose that you make life less complicated and not have different names for the things you want to merge on. ``` da = df_a.set_index(['type_1', 'id_1']).rename_axis(['type', 'id']) db = df_b.set_index(['type_2', 'id_2']).rename_axis(['type', 'id']) dc = df_c.set_index(['type_3', 'id_3']).rename_axis(['type', 'id']) da.join(db, how='outer').join(dc, how='outer') name_1 name_2 name_3 type id 0 3 Alex NaN NaN 7 NaN Bryce White 1 4 Amy Bill School 5 Allen Brian NaN 5 Allen Joe NaN 5 Jane Brian NaN 5 Jane Joe NaN ``` --- Here's an obnoxious way to get those other columns ``` from cytoolz.dicttoolz import merge i = pd.DataFrame(d.index.values.tolist(), d.index, d.index.names) d = d.assign(**merge( i.mask(d[f'name_{j}'].isna()).add_suffix(f'_{j}').to_dict('l') for j in [1, 2, 3] )) d[sorted(d.columns, key=lambda x: x.split('_')[::-1])] id_1 name_1 type_1 id_2 name_2 type_2 id_3 name_3 type_3 type id 0 3 3 Alex 0.0 NaN NaN NaN NaN NaN NaN 7 NaN NaN NaN 7 Bryce 0.0 7 White 0.0 1 4 4 Amy 1.0 4 Bill 1.0 4 School 1.0 5 5 Allen 1.0 5 Brian 1.0 NaN NaN NaN 5 5 Allen 1.0 5 Joe 1.0 NaN NaN NaN 5 5 Jane 1.0 5 Brian 1.0 NaN NaN NaN 5 5 Jane 1.0 5 Joe 1.0 NaN NaN NaN ```
fopen(file,w+) truncates the file before I can check if it's locked with flock() I have a function which receives a filename and a json object to write to a text file. The object is updated and needs to entirely replace the current contents of the file. Each site visitor has their own file. Multiple rapid changes create a situation where the file is truncated by `fopen(file,w+)`, then not written to as it's locked. End result is empty file. I'm sure there's a standard simply way to do this as it's such a usual activity. Ideally what I'm looking for is a way to check if a file has a lock before truncating the file with `fopen` in `w+` mode or a way to switch modes. It seems strange that you would have to truncate the file with `fopen()` to get a file handle to pass to `flock()` to check if it's locked -- but you just truncated it, so what's the point? Here's the function I have so far: ``` function updateFile($filename, $jsonFileData) { $fp = fopen($filename,"w+"); if (flock($fp, LOCK_EX)) { fwrite($fp, $jsonFileData); flock($fp, LOCK_UN); fclose($fp); return true; } else { fclose($fp); return false; } } ```
Example #1 from the [PHP manual](http://us3.php.net/flock) will do what you want with a slight modification. Use the [`"c"` mode](http://us3.php.net/manual/en/function.fopen.php) to open the file for writing, create it if it doesn't exist, and don't truncate it. ``` $fp = fopen("/tmp/lock.txt", "c"); if (flock($fp, LOCK_EX)) { // acquire an exclusive lock ftruncate($fp, 0); // truncate file fwrite($fp, "Write something here\n"); fflush($fp); // flush output before releasing the lock flock($fp, LOCK_UN); // release the lock } else { echo "Couldn't get the lock!"; } fclose($fp); ``` Full description of the [`"c"` mode](http://us3.php.net/manual/en/function.fopen.php): > > Open the file for writing. If the file does not exist, it is created. If it exists, it is neither truncated (as opposed to 'w'), nor the call to this function fails (as is the case with 'x'). The file pointer is positioned on the beginning of the file. This may be useful if it's desired to get an advisory lock (see [`flock()`](http://us3.php.net/flock)) before attempting to modify the file, as using 'w' could truncate the file before the lock was obtained (if truncation is desired, ftruncate() can be used after the lock is requested). > > > It doesn't look like you need it, but there's also a corresponding `"c+"` mode if you want to both read and write.
Inserting RecyclerView items at zero position - always stay scrolled to top I have a pretty standard `RecyclerView` with a vertical `LinearLayoutManager`. I keep inserting new items at the top and I'm calling `notifyItemInserted(0)`. **I want the list to stay scrolled to the top**; to always display the 0th position. From my requirement's point of view, the `LayoutManager` behaves differently based on the number of items. While all items fit on the screen, it looks and behaves as I expect: The **new item always appears on top and shifts everything** below it. [![Behavior with few items: Addition shifts other items down, first (newest) item is always visible](https://i.stack.imgur.com/reTZu.gif)](https://i.stack.imgur.com/reTZu.gif) However, as soon as the no. of items exceeds the RecyclerView's bounds, **new items are added above** the currently visible one, but the **visible items stay in view**. The user has to scroll to see the newest item. [![Behavior with many items: New items are added above the top boundary, user has to scroll to reveal them](https://i.stack.imgur.com/1IW9g.gif)](https://i.stack.imgur.com/1IW9g.gif) This behavior is totally understandable and fine for many applications, but not for a "live feed", where seeing the most recent thing is more important than "not distracting" the user with auto-scrolls. --- I know this question is almost a duplicate of [Adding new item to the top of the RecyclerView](https://stackoverflow.com/questions/38850591/adding-new-item-to-the-top-of-the-recyclerview)... but all of the proposed answers are mere **workarounds** (most of them quite good, admittedly). I'm looking for a way to actually **change this behavior**. I want the LayoutManager to **act exactly the same, no matter the number of items**. I want it to always shift all of the items (just like it does for the first few additions), not to stop shifting items at some point, and compensate by smooth-scrolling the list to the top. Basically, no `smoothScrollToPosition`, no `RecyclerView.SmoothScroller`. Subclassing `LinearLayoutManager` is fine. I'm already digging through its code, but without any luck so far, so I decided to ask in case someone already dealt with this. Thanks for any ideas! --- **EDIT:** To clarify why I'm dismissing answers from the linked question: Mostly I'm concerned about animation smoothness. Notice in the first GIF where `ItemAnimator` is moving other items while adding the new one, both fade-in and move animations have the same duration. But when I'm "moving" the items by smooth scrolling, I **cannot easily control the speed of the scroll**. Even with default `ItemAnimator` durations, this doesn't look as good, but in my particular case, I even needed to slow down the `ItemAnimator` durations, which makes it even worse: [![Insert "fixed" with smooth scroll + ItemAnimator durations increased](https://i.stack.imgur.com/NeUuj.gif)](https://i.stack.imgur.com/NeUuj.gif)
*Although I wrote this answer and this is the accepted solution, I suggest a look at the other later answers to see if they work for you before attempting this.* --- When an item is added to the top of the `RecyclerView` and the item can fit onto the screen, the item is attached to a view holder and `RecyclerView` undergoes an animation phase to move items down to display the new item at the top. If the new item cannot be displayed without scrolling, a view holder is not created so there is nothing to animate. The only way to get the new item onto the screen when this happens is to scroll which causes the view holder to be created so the view can be laid out on the screen. (There does seem to be an edge case where the view is partially displayed and a view holder is created, but I will ignore this particular instance since it is not germane.) So, the issue is that two different actions, animation of an added view and scrolling of an added view, must be made to look the same to the user. We could dive into the underlying code and figure out exactly what is going on in terms of view holder creation, animation timing, etc. But, even if we can duplicate the actions, it can break if the underlying code changes. This is what you are resisting. An alternative is to add a header at position zero of the `RecyclerView`. You will always see the animation when this header is displayed and new items are added to position 1. If you don't want a header, you can make it zero height and it will not display. The following video shows this technique: [![[video]](https://i.stack.imgur.com/1o6uO.gif)](https://i.stack.imgur.com/1o6uO.gif) This is the code for the demo. It simply adds a dummy entry at position 0 of the items. If a dummy entry is not to your liking, there are other ways to approach this. You can search for ways to add headers to `RecyclerView`. *(If you do use a scrollbar, it will misbehave as you can probably tell from the demo. To fix this 100%, you will have to take over a lot of the scrollbar height and placement computation. The custom `computeVerticalScrollOffset()` for the `LinearLayoutManager` takes care of placing the scrollbar at the top when appropriate. (Code was introduced after video taken.) The scrollbar, however, jumps when scrolling down. A better placement computation would take care of this problem. See [this Stack Overflow question](https://stackoverflow.com/questions/46033473/recyclerview-with-items-of-different-height-scrollbar) for more information on scrollbars in the context of varying height items.)* **MainActivity.java** ``` public class MainActivity extends AppCompatActivity implements View.OnClickListener { private TheAdapter mAdapter; private final ArrayList<String> mItems = new ArrayList<>(); private int mItemCount = 0; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); RecyclerView recyclerView = (RecyclerView) findViewById(R.id.recyclerView); LinearLayoutManager layoutManager = new LinearLayoutManager(this, LinearLayoutManager.VERTICAL, false) { @Override public int computeVerticalScrollOffset(RecyclerView.State state) { if (findFirstCompletelyVisibleItemPosition() == 0) { // Force scrollbar to top of range. When scrolling down, the scrollbar // will jump since RecyclerView seems to assume the same height for // all items. 
return 0; } else { return super.computeVerticalScrollOffset(state); } } }; recyclerView.setLayoutManager(layoutManager); for (mItemCount = 0; mItemCount < 6; mItemCount++) { mItems.add(0, "Item # " + mItemCount); } // Create a dummy entry that is just a placeholder. mItems.add(0, "Dummy item that won't display"); mAdapter = new TheAdapter(mItems); recyclerView.setAdapter(mAdapter); } @Override public void onClick(View view) { // Always at to position #1 to let animation occur. mItems.add(1, "Item # " + mItemCount++); mAdapter.notifyItemInserted(1); } } ``` **TheAdapter.java** ``` class TheAdapter extends RecyclerView.Adapter<TheAdapter.ItemHolder> { private ArrayList<String> mData; public TheAdapter(ArrayList<String> data) { mData = data; } @Override public ItemHolder onCreateViewHolder(ViewGroup parent, int viewType) { View view; if (viewType == 0) { // Create a zero-height view that will sit at the top of the RecyclerView to force // animations when items are added below it. view = new Space(parent.getContext()); view.setLayoutParams(new ViewGroup.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, 0)); } else { view = LayoutInflater.from(parent.getContext()) .inflate(R.layout.list_item, parent, false); } return new ItemHolder(view); } @Override public void onBindViewHolder(final ItemHolder holder, int position) { if (position == 0) { return; } holder.mTextView.setText(mData.get(position)); } @Override public int getItemViewType(int position) { return (position == 0) ? 0 : 1; } @Override public int getItemCount() { return mData.size(); } public static class ItemHolder extends RecyclerView.ViewHolder { private TextView mTextView; public ItemHolder(View itemView) { super(itemView); mTextView = (TextView) itemView.findViewById(R.id.textView); } } } ``` **activity\_main.xml** ``` <android.support.constraint.ConstraintLayout android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" tools:context=".MainActivity"> <android.support.v7.widget.RecyclerView android:id="@+id/recyclerView" android:layout_width="0dp" android:layout_height="0dp" android:scrollbars="vertical" app:layout_constraintBottom_toTopOf="@+id/button" app:layout_constraintStart_toStartOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintTop_toTopOf="parent" /> <Button android:id="@+id/button" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginBottom="8dp" android:layout_marginEnd="8dp" android:layout_marginStart="8dp" android:text="Button" android:onClick="onClick" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" /> </android.support.constraint.ConstraintLayout> ``` **list\_item.xml** ``` <LinearLayout android:id="@+id/list_item" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="16dp" android:orientation="horizontal"> <View android:id="@+id/box" android:layout_width="50dp" android:layout_height="50dp" android:layout_marginStart="16dp" android:background="@android:color/holo_green_light" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> <TextView android:id="@+id/textView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginStart="16dp" android:textSize="24sp" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintStart_toEndOf="@id/box" 
app:layout_constraintTop_toTopOf="parent" tools:text="TextView" /> </LinearLayout> ```
Best practices for shared constants between server and client using Backbone.JS What is the best way to handle shared constants between server code and client code using Backbone.JS ? For example, say I have this map of user roles: ``` user_roles = { 1 => "member", 2 => "moderator", 3 => "admin"} ``` Obviously, if you duplicate these definitions in code at both client and server side, this doesn't scale well and is error-prone. The solution I could come up with is simply expose these definitions as a Backbone.Collection or Backbone.Model and fetch them from the server, but this might cause undesired overhead if you have a large amount of constant types, and I'm not sure whether they actually belong inside the model at all. What are the different solutions to solve this problem, and how well do they scale?
I've tried a couple of different ways of handling this issue. I don't know if either of them is the **best** way, but both have worked well for me, so I'll just describe them here and hope they're of help. The core concept is the same in both: Constants are defined as **true constants** in the server-side language (C# and Java in my case), and are converted to JSON or javascript for the benefit of the client. I think this is the way to go, as opposed to sharing a single JSON/YML/etc. configuration file. Just because javascript doesn't have true constants doesn't mean your server shouldn't have them either. **Option 1: Load constants and enumerations in runtime via web service call.** Create a service endpoint (let's call it `/enums`) that basically collects all the server-side enumerations and constants into one big clump of JSON. To avoid an extra service call, if you're also using some server-side templating framework, you can bootstrap this into your `index.html`. If you don't want to bootstrap anything to your static content, there are other optimizations you can do. Because constants change so rarely, you can wrap `/enums` response into a wrapper object that includes the server application build version. For example: ``` { "version": "1.2.1.0", "enums": { ... } } ``` When the user hits the page for the first time, request `GET /enums`, and store the whole response into browser's local storage. On subsequent visits read the constants from the local storage and request the constants with `GET /enums?v=1.2.1.0`. The server should compare it's version to the passed version, and if they were identical, it would only return `HTTP 200 OK` to indicate to the client that its enumerations were still valid. This option is good if you work in a distributed environment where frontend- and backend-developers are using different tools, or don't generally work closely together. **Option 2: Share constants as a part of a build process** You can use text transformation templates (such as [T4](http://msdn.microsoft.com/en-us/library/bb126445.aspx)) to generate javascript code from server side language source. In my case this was C#. I stored all server-side enumerations in one directory and ran a build task which transformed all C# source files in that directory into javascript objects and combined them into one file `enums.js` in the client side source tree. I found this to be a preferrable option, but if the client and server development is not done synchronously (built together, released together), the dependency management can get pretty messy. In my case I always deployed both client and server together, so this worked out great.
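As a minimal client-side sketch of Option 1, assuming a `/enums` endpoint that returns the version-wrapped JSON shown above and an empty `200 OK` when the client's copy is still current (the endpoint name, storage key, and callback shape are illustrative, not part of Backbone or any library):

```javascript
// Illustrative only: cache the /enums payload in localStorage,
// keyed by the server build version.
function loadEnums(done) {
  var cached = JSON.parse(localStorage.getItem('enums') || 'null');
  var url = '/enums' + (cached ? '?v=' + encodeURIComponent(cached.version) : '');

  $.ajax({ url: url, dataType: 'text' }).done(function (body) {
    if (body) {                        // server sent a (new) set of constants
      localStorage.setItem('enums', body);
      done(JSON.parse(body).enums);
    } else {                           // empty 200 OK: our cached copy is still valid
      done(cached.enums);
    }
  });
}

// Usage: loadEnums(function (enums) { /* e.g. enums.user_roles */ });
```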
Do equal-size RAM sticks yield better performance? Is it important for all the RAM sticks to have the same memory size? I was told that upgrading our server from **64GB** (8 x 8GB) to **72GB** (7x 8GB + 1x 16GB) will not yield any performance gain, since the RAM sticks will not all be the same size. Is this true? If yes, can you please explain why? Server specs: Dell PowerEdge, Intel Nehalem, dual socket, quad core, 2.26 GHz, running on Debian. *(Is there a difference running a different OS such as Windows?)*
There is a tradeoff. You need to decide if it's worth it. What you might be giving up (depending on your existing configuration) is [dual or triple channel mode](http://en.wikipedia.org/wiki/Dual-channel_architecture) for your RAM. Effectively, it will make all of your RAM operate more slowly. In most cases, having more RAM will outweigh having faster RAM, but there comes a point of diminishing returns where the speed of the RAM starts to play a larger role. It's hard to give actual numbers on where the trade-off point is because there are so many factors: the base (single channel) speed of your RAM relative to the speed of your disk subsystem, the amount of each and how much RAM you are adding, how much RAM your application actually wants to use, how the cache hit/miss rate changes as you approach that last value, how good your memory controller is, etc. All else being equal, faster RAM is of course better than slower RAM, and you should strive for matched sticks where possible. But in general, your first job is to get enough RAM in there that, once loaded, your application only rarely needs to hit the disk for page fault reasons. Worry about dual or triple channel options after that.
Gradle dependencies difference between compile, apk project, compile project, provided, implementation project What is the difference between the following Gradle dependency configurations? ``` compile apk project compile project provided project implementation ``` My question is: what's the difference between `compile`, `apk project`, `compile project`, and `provided project` here?
There's two separate things to discuss here: Dependency Configurations and Dependency Sources. **Dependency Configurations** Configurations help define the transitivity of a dependency, which in turn removes the pain of having to discover and specify the libraries your own project/library requires, including them automatically. This notion of configurations in gradle is very similar to that of [Maven's scopes](http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope): 1. `compile`: Compile dependencies are available in all classpaths of a project. Furthermore, those dependencies are propagated to dependent projects. A compile-time dependency is generally required at runtime. 2. `apk`: Defines a runtime dependency. A dependency with this scope will not be required at compile time, but it will be for execution. This means that you can save time while compiling and still have the dependency available when your project actually runs. [This](https://stackoverflow.com/a/34835186/1221286) is a good example of when to use an apk dependency. 3. `provided`: It means that this dependency is available on the runtime environment. As a consequence, this scope is only available on the compilation and test classpath, and is not transitive. It is not supported on Android projects, though you can workaround it by defining your own configuration as discussed [here](https://stackoverflow.com/a/10406184/1221286). There are more configurations that you can encounter on Android, such as `testCompile`, which allows you to specify a compile-time dependency that will only be used for testing, say you want to use junit in your tests, then you would do as follows: ``` testCompile 'junit:junit:4.12' ``` **Dependency Source** Once you understand the configurations available for you, you need to specify an actual dependency. Dependencies might be internal or external, you may rely on another library you are working on, as well as on publicly available libraries. Here's where the `project` keyword comes in, allowing you to specify a dependency to an internal module or library. By defining a dependency as `compile project`, you are adding that module or library as a transitive dependency to your project. Assume you have a project `messages` with three modules (`producer`, `consumer` and `shared`), the project structure would look as follows: ``` messages/ build.gradle settings.gradle consumer/ build.gradle producer/ build.gradle shared/ build.gradle ``` Now assume that both `consumer` and `producer` store messages in json format and that you want to use [google-gson](https://github.com/google/gson) for that purpose. Assume that both projects have some common source code that they depend on, your `shared` module. `consumer`'s build.gradle could then define the following dependencies: ``` dependencies { // Internal dependency to project shared compile project (':shared') // External dependency to publicly available library, // through public repositories such as jcenter() or mavencentral() compile 'com.google.code.gson:gson:1.7.2' } ``` To sum up, it is the combination of both **configurations** and **sources** that enables you to declare dependencies as `compile`, `compile project`, `apk project` and more!
Parse buffer image to base64 on React Native I am trying to show an image fetched from a server in a React Native app. What I get in the response looks like this: [![enter image description here](https://i.stack.imgur.com/WIfAu.png)](https://i.stack.imgur.com/WIfAu.png) I tried to build a Buffer object using [buffer](https://www.npmjs.com/package/buffer) and then convert it to base64: ``` const imageBuffer = Buffer.from(JSON.stringify(res.data)) // This res.data is the whole object, including the type "Buffer" and the data array const imageBase64 = imageBuffer.toString('base64') setImage(imageBase64) ``` This returns a base64 string, but it doesn't show using the React Native Image component: ``` <Image source={{ uri: `data:image/jpeg;base64,${image}` }} /> ``` I couldn't find out how to handle images with this structure (a buffer as an array of numbers) in React Native. I was thinking that maybe there is a way to do this without the library mentioned, or without converting the buffer to base64, but I don't know. Thank you
I don't see any width and height provided to your image; this is the first common issue I faced, and it wasted my whole day. If the image still isn't shown after providing a width and height, see the solution below. I have used this function to convert an ArrayBuffer into Base64 (source: [link](https://gist.github.com/Deliaz/e89e9a014fea1ec47657d1aac3baa83c)): ``` arrayBufferToBase64 = buffer => { let binary = ''; let bytes = new Uint8Array(buffer); let len = bytes.byteLength; for (let i = 0; i < len; i++) { binary += String.fromCharCode(bytes[i]); } return window.btoa(binary); }; ``` I have used the above function like this: ``` <Image style={{ width: 200, height: 200, resizeMode: 'cover', backgroundColor: 'red', }} source={{ uri: 'data:image/jpeg;base64,' + this.arrayBufferToBase64(data.data), //data.data in your case }} /> ```
Qt: out-of-line definition of "closeEvent" does not match any declaration in "MainWindow" I just want to ignore the closing event thrown by the user in Qt, C++ I tried what is already in [the docs of Qt](https://doc.qt.io/qt-5/qwidget.html#closeEvent) ``` void MainWindow::closeEvent(QCloseEvent *event) { event->ignore(); } ``` But this throws me the error `out-of-line definition of "closeEvent" does not match any declaration in "MainWindow"`. I'd expect to ignore the closing event. The header file: ``` #ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QWidget> namespace Ui { class MainWindow; } class MainWindow : public QWidget { Q_OBJECT public: explicit MainWindow(QWidget *parent = nullptr); ~MainWindow(); private: Ui::MainWindow *ui; }; - #endif // MAINWINDOW_H ```
The solution is that you need to declare in the header file that you will override the closeEvent. In the docs you can se that is virtual protected, that means that you can override it. Your code should be the following: The header file: ``` #ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QWidget> namespace Ui { class MainWindow; } class MainWindow : public QWidget { Q_OBJECT public: explicit MainWindow(QWidget *parent = nullptr); ~MainWindow(); protected: void closeEvent(QCloseEvent *event) override; private: Ui::MainWindow *ui; }; - #endif // MAINWINDOW_H ``` The cpp file: ``` void MainWindow::closeEvent(QCloseEvent *event) { event->ignore(); } ```
Swipe to Refresh hides before loading the WebView I implemented swipe to refresh for my WebView and its working fine. But there is one problem which i am not able to solve. The problem is my Swipe to Refresh hides after 6 seconds. It did not remain there till the loading for WebView is completed. What i want is that Swipe to refresh should remain visible till the page fully loads. **My Implementations** ``` swipeView = (SwipeRefreshLayout) view.findViewById(R.id.swipe); myWebView = (WebView) view.findViewById(R.id.webview); myWebView.loadUrl("http://m.facebook.com"); swipeView.setOnRefreshListener(new SwipeRefreshLayout.OnRefreshListener() { @Override public void onRefresh() { swipeView.setRefreshing(true); ( new Handler()).postDelayed(new Runnable() { @Override public void run() { swipeView.setRefreshing(false); myWebView.loadUrl("http://m.facebook.com"); } }, 6000); }}); ```
Use a WebViewClient and stop your swipe refresh in its onPageFinished callback. For example, like below: ``` mWebView.setWebViewClient(new WebViewClient() { public void onPageFinished(WebView view, String url) { // do your stuff here swipeView.setRefreshing(false); } }); ``` For your case, change your code like below: ``` swipeView = (SwipeRefreshLayout) view.findViewById(R.id.swipe); myWebView = (WebView) view.findViewById(R.id.webview); myWebView.setWebViewClient(new WebViewClient() { public void onPageFinished(WebView view, String url) { // do your stuff here swipeView.setRefreshing(false); } }); swipeView.setOnRefreshListener(new SwipeRefreshLayout.OnRefreshListener() { @Override public void onRefresh() { myWebView.loadUrl("http://m.facebook.com"); }}); ```
Print directory tree but exclude a folder on windows cmd I want to print a directory tree excluding a folder. I already know the basic way to print the tree like this: ``` tree /A > tree.txt ``` I want to achieve something like this: ``` tree /A [exclude folder node_modules] > tree.txt ```
The standard [`tree.com`](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/tree) utility does *not* support excluding directories. - If you only need to **exclude directories by name *themselves* and *not* also *their entire subtree*** (child directories and their descendants), see [nferrell's answer](https://stackoverflow.com/a/43815420/45375). - If you need to **exclude the *entire subtree* of directories matching a given name**, more work is needed - **see below**. Below is the source code for PowerShell function `tree`, which emulates the behavior of `tree.com` command, while also: - offering selective exclusion of subtrees by name Note: You may specify multiple names separated by `,` and the names can be wildcard patterns - note that they only apply to the directory *name*, however, not the full path. - offering cross-platform support Note: Be sure to save your script with UTF-8 encoding *with BOM* for the script to function properly without `-Ascii`. - offering switch `-IncludeFiles` to also print *files*. With the function below loaded, the desired command looks like this: ``` tree -Exclude node_modules -Ascii > tree.txt ``` Run `tree -?` or `Get-Help tree` for more information. --- #### `tree` source code (add to your `$PROFILE`, for instance; PSv4+): ``` function tree { <# .SYNOPSIS Prints a directory's subtree structure, optionally with exclusions. #' .DESCRIPTION Prints a given directory's subdirectory structure recursively in tree form, so as to visualize the directory hierarchy similar to cmd.exe's built-in 'tree' command, but with the added ability to exclude subtrees by directory names. NOTE: Symlinks to directories are not followed; a warning to that effect is issued. .PARAMETER Path The target directory path; defaults to the current directory. You may specify a wildcard pattern, but it must resolve to a single directory. .PARAMETER Exclude One or more directory names that should be excluded from the output; wildcards are permitted. Any directory that matches anywhere in the target hierarchy is excluded, along with its subtree. If -IncludeFiles is also specified, the exclusions are applied to the files' names as well. .PARAMETER IncludeFiles By default, only directories are printed; use this switch to print files as well. .PARAMETER Ascii Uses ASCII characters to visualize the tree structure; by default, graphical characters from the OEM character set are used. .PARAMETER IndentCount Specifies how many characters to use to represent each level of the hierarchy. Defaults to 4. .PARAMETER Force Includes hidden items in the output; by default, they're ignored. .NOTES Directory symlinks are NOT followed, and a warning to that effect is issued. .EXAMPLE tree Prints the current directory's subdirectory hierarchy. .EXAMPLE tree ~/Projects -Ascii -Force -Exclude node_modules, .git Prints the specified directory's subdirectory hierarchy using ASCII characters for visualization, including hidden subdirectories, but excluding the subtrees of any directories named 'node_modules' or '.git'. #> [cmdletbinding(PositionalBinding=$false)] param( [parameter(Position=0)] [string] $Path = '.', [string[]] $Exclude, [ValidateRange(1, [int]::maxvalue)] [int] $IndentCount = 4, [switch] $Ascii, [switch] $Force, [switch] $IncludeFiles ) # Embedded recursive helper function for drawing the tree. function _tree_helper { param( [string] $literalPath, [string] $prefix ) # Get all subdirs. and, if requested, also files. 
$items = Get-ChildItem -Directory:(-not $IncludeFiles) -LiteralPath $LiteralPath -Force:$Force # Apply exclusion filter(s), if specified. if ($Exclude -and $items) { $items = $items.Where({ $name = $_.Name; -not $Exclude.Where({ $name -like $_ }, 'First') }) } if (-not $items) { return } # no subdirs. / files, we're done $i = 0 foreach ($item in $items) { $isLastSibling = ++$i -eq $items.Count # Print this dir. $prefix + $(if ($isLastSibling) { $chars.last } else { $chars.interior }) + $chars.hline * ($indentCount-1) + $item.Name # Recurse, if it's a subdir (rather than a file). if ($item.PSIsContainer) { if ($item.LinkType) { Write-Warning "Not following dir. symlink: $item"; continue } $subPrefix = $prefix + $(if ($isLastSibling) { $chars.space * $indentCount } else { $chars.vline + $chars.space * ($indentCount-1) }) _tree_helper $item.FullName $subPrefix } } } # function _tree_helper # Hashtable of characters used to draw the structure $ndx = [bool] $Ascii $chars = @{ interior = ('├', '+')[$ndx] last = ('└', '\')[$ndx] #' hline = ('─', '-')[$ndx] vline = ('│', '|')[$ndx] space = ' ' } # Resolve the path to a full path and verify its existence and expected type. $literalPath = (Resolve-Path $Path).Path if (-not $literalPath -or -not (Test-Path -PathType Container -LiteralPath $literalPath) -or $literalPath.count -gt 1) { throw "'$Path' must resolve to a single, existing directory."} # Print the target path. $literalPath # Invoke the helper function to draw the tree. _tree_helper $literalPath } ``` --- Note: - The third-party [`Get-PSTree`](https://github.com/santisq/PSTree) cmdlet (installable via `Import-Module PSTree -Scope CurrentUser`, for instance) offers a more fully featured implementation that notably includes reporting (cumulative) directory *sizes* (though no support for *excluding* directories as of this writing).
Marmalade UI builder I'm just starting out with my Marmalade career with 6.1. I'm aware they discontinued their UI builder in favor of 3rd party solutions. However, "Marmalade Ui Builder" in google brings up nothing in the first 3 pages (past 3 pages is always too far). Are there such 3rd party tools? Or are people actually using notepad and writing ui's by hand? I've seen an XML UI Builder that simply parses XML in to the .ui style. However, I don't see how that's any easier than typing everything out manually. It also is ambiguous as to whether IwUI is actually the proper way to go about it anymore. Some insight would be wonderful.
IwUI is the old system, replaced by IwNUI. The N stands for Native: IwNUI uses native controls on each platform. I have not used Iw\*UI for games; it is simpler to write the interface in GL myself. A basic screens/buttons manager can be coded in a few hours, and it is easier to write such a thing yourself than to learn a framework only to discover it has something you can't change. If you develop business apps, Marmalade has added PhoneGap support. I believe PhoneGap is a better way to build cross-platform business apps, especially because most mobile platforms share the WebKit engine. More importantly, the Marmalade team themselves do not seem to believe in Iw\*UI, which is why they are pushing PhoneGap on Marmalade. In other words, do not use Iw\*UI if you do not know why you actually need it. There is also a 3rd-party 2D game engine, IwGame. I do not use it, but it has its own interface builder that defines the interface in XML.
Traversing a vector in reverse direction with size\_t values I want to traverse through the values of a vector in opposite direction. As you know the size of a vector is of size\_t. When I use the following code: ``` for(size_t r=m.size()-1; r >= 0; r--) { x[r] = f[r]; for(size_t c = r+1; c < m.size(); c++) { x[r] -= m[r][c] * x[c]; } } ``` I will go out of the range of the vector because the r will become 4294967295 after decrementing r = 0. I am not changing the r's type because in my project, I am treating warnings as errors, so it should be size\_t or I should cast it which is not interesting.
If you actually want to use `size_t` for indexing, the loop could be formulated as follows. ``` for(size_t r = m.size(); r > 0; r--) { x[r-1] = f[r-1]; for(size_t c = r; c < m.size(); c++) { x[r-1] -= m[r-1][c] * x[c]; } } ``` Basically you would iterate from `m.size()` to `1` and compensate by shifting inside the loop; but this solution might be a bit hard to follow. In [this question](https://stackoverflow.com/questions/4205720/iterating-over-a-vector-in-reverse-direction), a proposed solution is to use a `reverse_iterator`, which can be seen as a suitable abstraction of the index. The entire topic is coverd in more depth in [this question](https://stackoverflow.com/questions/409348/iteration-over-vector-in-c).
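The `reverse_iterator` alternative mentioned above sidesteps the unsigned-underflow problem entirely. A minimal illustration of reverse traversal (not a rewrite of the full back-substitution loop; the index-recovery line is only needed if, as in the question, the forward index is still required):

```cpp
#include <cstddef>
#include <iterator>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4};

    // rbegin()/rend() walk the vector back to front; the loop condition never
    // relies on a size_t index dropping below zero.
    for (auto it = v.rbegin(); it != v.rend(); ++it) {
        int value = *it;

        // If the forward index is still needed (as with m[r][c] in the question),
        // it can be recovered from the iterator instead of maintained by hand.
        std::size_t r = static_cast<std::size_t>(std::distance(it, v.rend())) - 1;

        (void)value; (void)r; // placeholders for the real work
    }
}
```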
Hide address if already completed WooCommerce I'm looking for a way to hide the billing address on the checkout page of my woocommerce theme if the user has already filled up the billing form (from a previous order or if the user has done it previously from the "my account" page). I've found ways to hide the billing / shipping form completely on the checkout page if the user is logged in (see below), however I can't find a way to do the above. ``` add_filter( 'woocommerce_checkout_fields' , 'custom_override_checkout_fields' ); function custom_override_checkout_fields( $fields ) { if( is_user_logged_in() ){ unset($fields['billing']); $fields['billing'] = array(); } return $fields; } ``` Any idea? Thank you!
It will depend on what you consider to be a fully completed address. I made a snippet function you can use as a starting point: ``` add_filter( 'woocommerce_checkout_fields' , 'custom_override_checkout_fields' ); function custom_override_checkout_fields( $fields ) { if( is_user_logged_in() && has_billing()){ unset($fields['billing']); $fields['billing'] = array(); } return $fields; } // Check the meta of Postcode and Country if they are entered. function has_billing($user_id = false){ if(!$user_id) $user_id = get_current_user_id(); $shipping_postcode = get_user_meta( $user_id, 'billing_postcode', true ); $shipping_country = get_user_meta( $user_id, 'billing_country', true ); // Fetch more meta for the condition as needed. if($shipping_postcode && $shipping_country){ return true; } return false; } ``` Note: the variable names use the **shipping\_** prefix, but the user meta keys read here are the billing ones ( **billing\_** ); there is an equivalent set with the **shipping\_** prefix. Edit: there are also the meta keys **billing\_address\_1** and **billing\_address\_2**; as always, the prefix can be either **billing\_** or **shipping\_**. Also, if for some reason you don't have a shipping or billing address associated with the user meta keys but the customer has placed an order before, you can check this code to fetch the order address: > > [Woocommerce WC\_Order get\_shipping\_address() not returning as array](https://stackoverflow.com/questions/22327474/woocommerce-wc-order-get-shipping-address-not-returning-as-array) (old post, might not be valid anymore) > > >
How to add a client using JDBC for ClientDetailsServiceConfigurer in Spring? I have the in memory thing working as follows: ``` @Override public void configure(ClientDetailsServiceConfigurer clients) throws Exception { clients.inMemory() .withClient("clientapp") .authorizedGrantTypes("password", "refresh_token") .authorities("USER") .scopes("read", "write") .resourceIds(RESOURCE_ID) .secret("123456"); } ``` I would like to use the JDBC implementation. For this, I have created the following tables (using MySQL): ``` -- Tables for OAuth token store CREATE TABLE oauth_client_details ( client_id VARCHAR(255) PRIMARY KEY, resource_ids VARCHAR(255), client_secret VARCHAR(255), scope VARCHAR(255), authorized_grant_types VARCHAR(255), web_server_redirect_uri VARCHAR(255), authorities VARCHAR(255), access_token_validity INTEGER, refresh_token_validity INTEGER, additional_information VARCHAR(4096), autoapprove TINYINT ); CREATE TABLE oauth_client_token ( token_id VARCHAR(255), token BLOB, authentication_id VARCHAR(255), user_name VARCHAR(255), client_id VARCHAR(255) ); CREATE TABLE oauth_access_token ( token_id VARCHAR(255), token BLOB, authentication_id VARCHAR(255), user_name VARCHAR(255), client_id VARCHAR(255), authentication BLOB, refresh_token VARCHAR(255) ); CREATE TABLE oauth_refresh_token ( token_id VARCHAR(255), token BLOB, authentication BLOB ); CREATE TABLE oauth_code ( code VARCHAR(255), authentication BLOB ); ``` Do I need to manually add a client in the MySQL tables? I tried this: ``` clients.jdbc(dataSource).withClient("clientapp") .authorizedGrantTypes("password", "refresh_token") .authorities("USER") .scopes("read", "write") .resourceIds(RESOURCE_ID) .secret("123456"); ``` Hoping that Spring would insert the correct things in the good tables, but it does not seem to do that. Why is it that you can further chain after `jdbc()` ?
This question is fairly old, but none of the replies gave an answer to the questioner's original problem. I stumbled over the same issue while getting myself familiar with Spring's OAuth2 implementation and wondered why the `ClientDetailsServiceConfigurer` was not persisting the clients that were programmatically added via the `JdbcClientDetailsServiceBuilder` (which is instantiated by calling the `jdbc(datasource)` method on the configurer), even though all the tutorials on the net show a similar example to the one posted by Wim. After digging deeper into the code, I noticed the reason. It's simply because the code that updates the `oauth_client_details` table is never called. What's missing is the following call after configuring all clients: `.and().build()`. So, Wim's code must actually look as follows: ``` clients.jdbc(dataSource).withClient("clientapp") .authorizedGrantTypes("password", "refresh_token") .authorities("USER") .scopes("read", "write") .resourceIds(RESOURCE_ID) .secret("123456").and().build(); ``` Et voilà, the client `clientapp` is now persisted into the database.
Python - how slice([1,2,3]) works and what does slice(None, [1, 3], None) represent? [Documentation for `class slice(start, stop[, step])`](https://docs.python.org/3/library/functions.html?highlight=slice%20function#slice): > > Return a slice object representing the set of indices specified by range(start, stop, step). > > > What is going in the code and why the slice class init even allows a list as its argument? ``` print(slice([1,3])) --- slice(None, [1, 3], None) print(slice(list((1,3)))) --- slice(None, [1, 3], None) # why stop is list? hoge = [1,2,3,4] _s = slice(list((1,3))) print(hoge[_s]) -------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-59-1b2df30e9bdf> in <module> 1 hoge = [1,2,3,4] 2 _s = slice(list((1,3))) ----> 3 print(hoge[_s]) TypeError: slice indices must be integers or None or have an __index__ method ``` --- # Updae Thanks to the answer from Selcuk. [sliceobject.c#L303-L322](https://github.com/python/cpython/blob/master/Objects/sliceobject.c#L303-L322) ``` static PyObject * slice_new(PyTypeObject *type, PyObject *args, PyObject *kw) { PyObject *start, *stop, *step; start = stop = step = NULL; if (!_PyArg_NoKeywords("slice", kw)) return NULL; if (!PyArg_UnpackTuple(args, "slice", 1, 3, &start, &stop, &step)) return NULL; /* This swapping of stop and start is to maintain similarity with range(). */ if (stop == NULL) { stop = start; // <----- start = NULL; } return PySlice_New(start, stop, step); // PySlice_New in L110 in the same file } ```
From [the documentation](https://docs.python.org/3/library/functions.html?highlight=slice%20function#slice): > > Slice objects have read-only data attributes `start`, `stop` and `step` which merely return the argument values (or their default). They have no other explicit functionality [...] > > > So they are simply dummy objects that keep whatever you pass them. You can even pass strings or other objects: ``` my_slice = slice("foo", "bar", "baz") ``` > > [...] however they are used by Numerical Python and other third party extensions. > > > It is the third party extension's job to validate if the `start`, `stop`, and `step` values make any sense. Also see the [CPython implementation](https://github.com/python/cpython/blob/master/Objects/sliceobject.c#L110). When you only pass one argument it is assumed to be the `stop` value. That's why you end up with `start` and `step` values set to `None`: > > `class slice(stop)` > > > `class slice(start, stop[, step])` > > >
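A small REPL-style sketch of the two points above (the values are chosen arbitrarily): the one-argument form fills `stop`, and validation only happens when something finally consumes the slice.

```python
s1 = slice(5)                        # one argument: it becomes `stop`
print(s1.start, s1.stop, s1.step)    # None 5 None

s2 = slice("foo", "bar", "baz")      # accepted without complaint; slice just stores it
print(s2.start, s2.stop, s2.step)    # foo bar baz

hoge = [1, 2, 3, 4]
print(hoge[slice(1, 3)])             # [2, 3] - list indexing is what validates the values
# hoge[slice([1, 3])]                # would raise TypeError - from list indexing, not from slice()
```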
How can variable field width be implemented with printf()? The question is : > > How can variable field width be implemented using `printf()`? That is, instead of `%8d`, the width should be specified at run time. > > > I came across some C code on the Internet based on the question above but as I am new to C programming I haven't been able to make heads or tails of the code. I am posting the code below: ``` #include <stdio.h> int main() { const char text[] = "Hello world"; int i; for ( i = 1; i < 12; ++i ) { printf("\"%.*s\"\n", i, text); } return 0; } ```
First of all, let me tell you, the code you have shown is about controlling the *precision*, not the field width. In the *shortened* form

```
%A.B<format specifier>
```

`A` denotes the field width and `B` the precision.

Now, quoting the `C11` standard, chapter §7.21.6.1, `fprintf()` (*emphasis mine*)

> 
> Each conversion specification is introduced by the character %. After the %, the following appear in sequence:
> 
> [..]
> 
> - An optional precision that gives the minimum number of digits to appear for the `d`, `i`, `o`, `u`, `x`, and `X` conversions, the number of digits to appear after the decimal-point character for `a`, `A`, `e`, `E`, `f`, and `F` conversions, the maximum number of significant digits for the `g` and `G` conversions, **or the maximum number of bytes to be written for `s` conversions. The precision takes the form of a period (`.`) followed either by an asterisk `*` (described later) or by an optional decimal integer**; if only the period is specified, the precision is taken as zero. If a precision appears with any other conversion specifier, the behavior is undefined.
> 
> 

and

> 
> As noted above, a field width, or precision, or both, may be indicated by an asterisk. In this case, an `int` argument supplies the field width or precision. [...]
> 
> 

So, in your case,

```
printf("\"%.*s\"\n", i, text);
```

the precision will be supplied by `i`, which can hold different values at run time.

---

The complete format (broken into separate lines for ease of readability)

```
%
<Zero or more flags>
<optional minimum field width>
<optional precision>
<optional length modifier>
<A conversion specifier character>
```
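To tie this back to the original question about a run-time *field width*: the same asterisk mechanism applies there as well. A minimal sketch (the variable names are just for illustration):

```c
#include <stdio.h>

int main(void)
{
    const char text[] = "Hello world";
    int width = 8;       /* field width decided at run time */
    int precision = 5;   /* precision decided at run time   */

    printf("[%*d]\n", width, 42);                 /* [      42] - width from an int argument */
    printf("[%.*s]\n", precision, text);          /* [Hello]    - precision, as in the question */
    printf("[%*.*s]\n", width, precision, text);  /* [   Hello] - both, width argument first */

    return 0;
}
```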
jQuery JavaScript Nested Asynchronous Functions callback
I'm a little confused about how to determine when an async function that is called multiple times from another one has finished its last call:

```
function MainAsyncFunction(callback) {
  for (var i = 0; i < 10; i++) {
    SubAsyncFunction(function(success) {
      if (i >= 10 && success) { // THIS IS WRONG?!
        callback(true); // happens too early
      }
    });
  }
};

function SubAsyncFunction(callback) {
  SubSubAsyncFunction(function() {
    callback(true);
  });
}
```

What I'm doing is calling the [Google Distance Matrix service](https://developers.google.com/maps/documentation/javascript/distancematrix#distance_matrix_requests), which has a limitation of 25 destinations, hence I'm having to split my array of destinations and call this service multiple times, but I don't understand when it has all finished. In the main bit of code I can tell that the second iteration of the loop in *MainAsyncFunction* hasn't yet completed when it does a callback.

I think my problem is that I haven't got my head around the order of events when dealing with async functions in JavaScript... please explain how this is normally achieved.
You could use the jQuery [Deferred](http://api.jquery.com/category/deferred-object/) object, which acts as a token representing the status of an async operation. The following is a simplified example: ``` //set up your sub method so that it returns a Deferred object function doSomethingAsync() { var token = $.Deferred(); myAsyncMethodThatTakesACallback(function() { //resolve the token once the async operation is complete token.resolve(); }); return token.promise(); }; //then keep a record of the tokens from the main function function doSomethingAfterAllSubTasks() { var tokens = []; for (var i=0; i < 100; i++) { //store all the returned tokens tokens.push(doSomethingAsync()); } $.when.apply($,tokens) .then(function() { //once ALL the sub operations are completed, this callback will be invoked alert("all async calls completed"); }); }; ``` The following is an updated version of the OP's updated code: ``` function MainAsyncFunction(callback) { var subFunctionTokens = []; for (var i = 0; i < 10; i++) { subFunctionTokens.push(SubAsyncFunction()); } $.when.apply($,subFunctionTokens) .then(function() { callback(true); }); }; function SubAsyncFunction() { var token = $.Deferred(); SubSubAsyncFunction(function() { token.resolve(); }); return token.promise(); };​ ```
How does Mail PHP work?
I stumbled on the following script today for sending an e-mail using PHP mail.

```
<?php
$to = "[email protected]";
$subject = "Test mail";
$message = "Hello! This is a simple email message.";
$from = "[email protected]";
$headers = "From:" . $from;
mail($to, $subject, $message, $headers);
echo "Mail Sent.";
?>
```

The above can be run through `php mail.php`, and instantly you'll get an e-mail sent to `$to` from `$from`, despite never having set up any outgoing/incoming servers.

It really intrigued me, since my CMS uses an outgoing SMTP server (well, the same way Mail PHP does), which I need to set up with my Outlook SMTP username and password - some sort of verification. Mail PHP, however, just... sends an e-mail. To the address you set. From the address you set.

Looking at the [PHP docs](http://php.net/manual/en/function.mail.php) does not really reveal how it works. Does Mail PHP not have any issues with spamming, since anyone can send anyone anything, anytime, programmatically, without verification of the `from` identity?

EDIT: It's rather funny that the people in the comments were talking about the POTUS, since I had the exact thing in mind:

![enter image description here](https://i.stack.imgur.com/ajehP.png)

It *did* land in my junk folder, but I'm sure it isn't hard to make this look convincing enough and still be considered `"oh damn spam filter lost my e-mail!"`
The `mail` function uses the settings from php.ini. The details of this configuration can be found in [Mail Runtime Configuration](http://php.net/manual/en/mail.configuration.php). The defaults can be set in php.ini, although you can override them using [`ini_set`](http://php.net/manual/en/function.ini-set.php). I bet you sent the mail from a PHP script on a hosted server. That server probably has SMTP settings configured beforehand. If you would try this locally on a WAMP/LAMP server, you would have to do this configuration yourself, since PHP cannot read your Outlook/WhateverMailclient settings. As stated in the comments, you can specify the sender/from address yourself. SMTP doesn't require this to be the actual sender domain, so that's why this works. The missing link is the pre-configured SMTP server of your host. Some relay servers do check for this, and your mail might be blocked or sent to a junk mail folder. You can however configure this in your DNS to indicate that `<Your server's IP>` is indeed allowed to send email for `<yourdomain>`. For more information about that subject, you might want to read [this question on ServerFault](https://serverfault.com/questions/24943/reverse-dns-how-to-correctly-configure-for-smtp-delivery).
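For completeness, a minimal sketch of overriding those php.ini settings at run time. The host name and addresses here are made up, and note that the `SMTP`/`smtp_port`/`sendmail_from` directives apply to the Windows build of PHP; on Unix, `mail()` hands the message to the program configured in `sendmail_path` instead.

```php
<?php
// hypothetical relay host and addresses, for illustration only
ini_set('SMTP', 'smtp.example.com');
ini_set('smtp_port', '25');
ini_set('sendmail_from', 'me@example.com');

mail('you@example.com', 'Test', 'Sent through the relay configured above.');
```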
How to remove SoapUI? I've installed SoapUI by downloading it from its website (<http://www.soapui.org/>), but I would now like to uninstall it. However, it installed using its own little installer so I don't know what/where it added. Is there an elegant way to remove it or to remove a software that installs like this in general?
``` rinzwind@discworld:~/SmartBear/soapUI-4.6.0$ ls bin jre LICENSE.txt soapUI-4.6.0.desktop UserGuide.url hermesJMS lib README.md soapUI.url wsi-test-tools JavaDoc.url licences readme.txt uninstal ``` See the file named `uninstall`? ;) ``` rinzwind@discworld:~/SmartBear/soapUI-4.6.0$ ./uninstall testing JVM in /home/rinzwind/SmartBear/soapUI-4.6.0/jre ... ``` ![enter image description here](https://i.stack.imgur.com/hPKsx.png)
How to concat 2 sliced arrays?
I have:

```
var a = [1,2,3,4,5,6,7,8,9]
```

and I'm trying to do:

```
var b = [];
b.concat(a.slice(0,3), a.slice(-3))
```

And as a result I have:

```
b == []
```

How can I get the first 3 and the last 3 elements of the array into `b`?
[`concat`](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/concat) doesn't modify the array in place. The result of `concat()` has to be captured.

> 
> The `concat()` method returns a new array comprised of the array on which it is called joined with the array(s) and/or value(s) provided as arguments.
> 
> 

You're not updating the value of the `b` array.

```
var a = [1, 2, 3, 4, 5, 6, 7, 8, 9];
var b = [].concat(a.slice(0, 3), a.slice(-3));
document.write(b);
console.log(b);
```

You can also `concat` the sliced arrays.

```
var a = [1, 2, 3, 4, 5, 6, 7, 8, 9];
var b = a.slice(0, 3).concat(a.slice(-3));
document.write(b);
console.log(b);
```
Combine/aggregate eclipse p2 repositories / extendable p2 repository
With the maven/tycho build for the [Nodeclipse](http://www.nodeclipse.org/) Eclipse plugin there is a new p2 repository for every release. Releases are done on [Bintray](https://bintray.com/nodeclipse/nodeclipse/nodeclipse-1/), which does not allow updating files. So every version goes in its own folder.

```
BaseFolder
BaseFolder/VersionFolder1
BaseFolder/VersionFolder2
BaseFolder/VersionFolder3
```

Is it possible to have `BaseFolder` prepared once as an extendable p2 repository, and `VersionFolderN` added later? So that there would be only one URL for updates and the Eclipse platform could discover updates in the repository.
What you are looking for is a composite p2 repository. You'll just need the following two files in the base folder: - A `compositeContent.xml` with the following content: ``` <?xml version='1.0' encoding='UTF-8'?> <?compositeMetadataRepository version='1.0.0'?> <repository name='Project XYZ Releases Repository' type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository' version='1.0.0'> <properties size='1'> <property name='p2.atomic.composite.loading' value='true'/> </properties> <children size='3'> <child location='VersionFolder1'/> <child location='VersionFolder2'/> <child location='VersionFolder3'/> </children> </repository> ``` - A `compositeArtifacts.xml` with the following, similar content: ``` <?xml version='1.0' encoding='UTF-8'?> <?compositeArtifactRepository version='1.0.0'?> <repository name='Project XYZ Releases Repository' type='org.eclipse.equinox.internal.p2.artifact.repository.CompositeArtifactRepository' version='1.0.0'> <properties size='0'> </properties> <children size='3'> <child location='VersionFolder1'/> <child location='VersionFolder2'/> <child location='VersionFolder3'/> </children> </repository> ``` When a new version is released, just add the new folder as child in both files. The two files may also be compressed as ZIP and named `compositeContent.jar` and `compositeArtifacts.jar` to save network bandwidth. However this makes editing the files a little less practical. The Eclipse simultaneous release repositories also use this approach. E.g., at the time of writing this, the [Eclipse Luna repository](http://download.eclipse.org/releases/luna) contains only the original release and SR 1 (see [compositeContent](http://download.eclipse.org/releases/luna/compositeContent.jar), [compositeArtifacts](http://download.eclipse.org/releases/luna/compositeArtifacts.jar)). SR 2 will be added later, so that users will be able to get updates without having to configure a new repository URL.
Get the max value over the window in pyspark
I am getting the maximum value over a specific window in pyspark, but what is returned from the method is not what I expected. Here is my code:

```
test = spark.createDataFrame(DataFrame({'grp': ['a', 'a', 'b', 'b'], 'val': [2, 3, 3, 4]}))
win = Window.partitionBy('grp').orderBy('val')
test = test.withColumn('row_number', F.row_number().over(win))
test = test.withColumn('max_row_number', F.max('row_number').over(win))
display(test)
```

And the output is:

[![enter image description here](https://i.stack.imgur.com/sbsB2.png)](https://i.stack.imgur.com/sbsB2.png)

I expected it would return 2 for both group "a" and group "b", but it didn't. Does anyone have ideas on this problem? Thanks a lot!
The problem here is with the frame for the `max` function. If you order the window as you are doing the frame is going to be `Window.unboundedPreceding, Window.currentRow`. So you can define another window where you drop the order (because the max function doesn't need it): ``` w2 = Window.partitionBy('grp') ``` You can see that in PySpark [docs](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Window): > > Note When ordering is not defined, an unbounded window frame (rowFrame, unboundedPreceding, unboundedFollowing) is used by default. When ordering is defined, a growing window frame (rangeFrame, unboundedPreceding, currentRow) is used by default. > > >
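Putting it together, a minimal sketch using the same DataFrame as in the question: the ordered window is kept for `row_number` and the unordered one is used for `max`.

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

win = Window.partitionBy('grp').orderBy('val')   # frame: unboundedPreceding -> currentRow
w2 = Window.partitionBy('grp')                   # no ordering: frame spans the whole partition

test = (test
        .withColumn('row_number', F.row_number().over(win))
        .withColumn('max_row_number', F.max('row_number').over(w2)))
# max_row_number is now 2 for every row in both groups
```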
Open two links in different new tabs, not in the same
I have two links on my page with `target="_blank"`, but if I open the first link in a new tab, when I click the second, the page loads in the tab where the first link is already open. I need to make them open in different tabs, not in the same one. Thanks.

Here is the code...

```
<ul style="float: left;">
<li><a href='{external link}' target="_blank"><span>Help Desk</span></a></li>
<li><a href='{external link}' target="_blank"><span>Bússola</span></a></li>
</ul>
```
`target="_blank"` is the correct (and possibly the only) way to do this but how it behaves depends on the browser and browser settings. See [HTML: how to force links to open in a new tab, not new window](https://stackoverflow.com/questions/3970746/html-how-to-force-links-to-open-in-a-new-tab-not-new-window/18764547#18764547) for more details. A workaround might be to give them different names like so: ``` <ul style="float: left;"> <li><a href='{external link}' target="a"><span>Help Desk</span></a></li> <li><a href='{external link}' target="b"><span>Bússola</span></a></li> </ul> ``` That should force the browser to open two new tabs and if you click the first link, then always the same frame will reload (same for the second tab).
Solr Ping query caused exception: undefined field text Im trying to do some work on my server but running into problems. When I try to ping the server through the admin panel I get this error, which I believe might be causing the problem: > > The server encountered an internal error (Ping query caused exception: > undefined field text org.apache.solr.common.SolrException: Ping query > caused exception: undefined field text at > org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:76) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:1376) at > org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:365) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:260) > at > org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) > at > org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) > at > org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) > at > org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) > at > > > Can anyone give me a bit of guideance as to what might be going wrong? I'm using Solr 3.6. I think it may be to do with the defined "text" in the schema.xml?? This is my schema currently: <https://gist.github.com/3689621> Any help would be much appreciated. James
Based on the error, I am guessing that the query defined in the /admin/ping requestHandler is searching against a field named `text`, which you do not have defined in your schema.

Here is a typical ping requestHandler section

```
<requestHandler name="/admin/ping" class="solr.PingRequestHandler">
   <lst name="invariants">
     <str name="q">solrpingquery</str>
   </lst>
   <lst name="defaults">
     <str name="qt">standard</str>
     <str name="echoParams">all</str>
     <str name="df">text</str>
   </lst>
 </requestHandler>
```

Note the `<str name="df">text</str>` setting. This is the default field that the ping will execute the search against. You should change this to a field that is defined in your schema, perhaps `title` or `description`, based on your schema.
Detect Windows Version With JavaScript Specifically, I am trying to detect Windows XP users as they are not compatible with my software. Is there a way to detect with at least 70% or higher accuracy?
Try `navigator.appVersion` <http://msdn.microsoft.com/en-us/library/ms533080(v=VS.85).aspx> <https://developer.mozilla.org/en/DOM/window.navigator.appVersion> I have Windows 7 here and the `navigator.appVersion` returns a string containing "NT 6.1" in these browsers: Chrome, Opera, Safari, IE9 beta. Only Firefox does not return that info in that string :( btw, WinXP is "NT 5.1", Vista is "NT 6.0" ... ***Update*** `navigator.userAgent` returns the "NT version" string in all 5 browsers. That means that `userAgent` is the property that is able to tell the Windows version.
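A rough sketch of an XP check along those lines (best-effort only - user-agent strings can be spoofed, so treat the result as a hint, not a guarantee):

```javascript
function isWindowsXP() {
  // Windows XP reports "Windows NT 5.1"; "NT 5.2" covers XP x64 / Server 2003
  return /Windows NT 5\.[12]/.test(navigator.userAgent);
}

if (isWindowsXP()) {
  alert("Sorry, this software does not support Windows XP.");
}
```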
C# method overload resolution issues in Visual Studio 2013 Having these three methods available in Rx.NET library ``` public static IObservable<TResult> Create<TResult>(Func<IObserver<TResult>, CancellationToken, Task> subscribeAsync) {...} public static IObservable<TResult> Create<TResult>(Func<IObserver<TResult>, CancellationToken, Task<IDisposable>> subscribeAsync) {...} public static IObservable<TResult> Create<TResult>(Func<IObserver<TResult>, CancellationToken, Task<Action>> subscribeAsync) {...} ``` I write the following sample code in **MSVS 2013**: ``` var sequence = Observable.Create<int>( async ( observer, token ) => { while ( true ) { token.ThrowIfCancellationRequested(); await Task.Delay( 100, token ); observer.OnNext( 0 ); } } ); ``` This does not compile due to ambiguous overloads. Exact output from the compiler being: ``` Error 1 The call is ambiguous between the following methods or properties: 'System.Reactive.Linq.Observable.Create<int>(System.Func<System.IObserver<int>,System.Threading.CancellationToken,System.Threading.Tasks.Task<System.Action>>)' and 'System.Reactive.Linq.Observable.Create<int>(System.Func<System.IObserver<int>,System.Threading.CancellationToken,System.Threading.Tasks.Task>)' ``` However as soon as I replace `while( true )` with `while( false )` or with `var condition = true; while( condition )...` ``` var sequence = Observable.Create<int>( async ( observer, token ) => { while ( false ) // It's the only difference { token.ThrowIfCancellationRequested(); await Task.Delay( 100, token ); observer.OnNext( 0 ); } } ); ``` the error disappears and method call resolves to this: ``` public static IObservable<TResult> Create<TResult>(Func<IObserver<TResult>, CancellationToken, Task> subscribeAsync) {...} ``` What is going on there?
This is a fun one :) There are multiple aspects to it. To start with, let's simplify it very significantly by removing Rx and actual overload resolution from the picture. Overload resolution is handled at the very end of the answer. **Anonymous function to delegate conversions, and reachability** The difference here is whether the end-point of the lambda expression is reachable. If it is, then that lambda expression doesn't return anything, and the lambda expression can only be converted to a `Func<Task>`. If the end-point of the lambda expression *isn't* reachable, then it can be converted to any `Func<Task<T>>`. The form of the `while` statement makes a difference because of this part of the C# specification. (This is from the ECMA C# 5 standard; other versions may have slightly different wording for the same concept.) > > The end point of a `while` statement is reachable if at least one of the following is true: > > > - The `while` statement contains a reachable break statement that exits the while statement. > - The `while` statement is reachable and the Boolean expression does not have the constant value `true`. > > > When you have a `while (true)` loop with no `break` statements, neither bullet is true, so the end point of the `while` statement (and therefore the lambda expression in your case) is not reachable. Here's a short but complete example without any Rx involved: ``` using System; using System.Threading.Tasks; public class Test { static void Main() { // Valid Func<Task> t1 = async () => { while(true); }; // Valid: end of lambda is unreachable, so it's fine to say // it'll return an int when it gets to that end point. Func<Task<int>> t2 = async () => { while(true); }; // Valid Func<Task> t3 = async () => { while(false); }; // Invalid Func<Task<int>> t4 = async () => { while(false); }; } } ``` We can simplify even further by removing async from the equation. If we have a synchronous parameterless lambda expression with no return statements, that's *always* convertible to `Action`, but it's *also* convertible to `Func<T>` for any `T` if the end of the lambda expression isn't reachable. Slight change to the above code: ``` using System; public class Test { static void Main() { // Valid Action t1 = () => { while(true); }; // Valid: end of lambda is unreachable, so it's fine to say // it'll return an int when it gets to that end point. Func<int> t2 = () => { while(true); }; // Valid Action t3 = () => { while(false); }; // Invalid Func<int> t4 = () => { while(false); }; } } ``` We can look at this in a slightly different way by removing delegates and lambda expressions from the mix. Consider these methods: ``` void Method1() { while (true); } // Valid: end point is unreachable int Method2() { while (true); } void Method3() { while (false); } // Invalid: end point is reachable int Method4() { while (false); } ``` Although the error method for `Method4` is "not all code paths return a value" the way this is detected is "the end of the method is reachable". Now imagine those method bodies are lambda expressions trying to satisfy a delegate with the same signature as the method signature, and we're back to the second example... **Fun with overload resolution** As Panagiotis Kanavos noted, the original error around overload resolution isn't reproducible in Visual Studio 2017. So what's going on? Again, we don't actually need Rx involved to test this. But we can see some *very* odd behavior. 
Consider this: ``` using System; using System.Threading.Tasks; class Program { static void Foo(Func<Task> func) => Console.WriteLine("Foo1"); static void Foo(Func<Task<int>> func) => Console.WriteLine("Foo2"); static void Bar(Action action) => Console.WriteLine("Bar1"); static void Bar(Func<int> action) => Console.WriteLine("Bar2"); static void Main(string[] args) { Foo(async () => { while (true); }); Bar(() => { while (true) ; }); } } ``` That issues a warning (no await operators) but it compiles with the C# 7 compiler. The output surprised me: ``` Foo1 Bar2 ``` So the resolution for `Foo` is determining that the conversion to `Func<Task>` is better than the conversion to `Func<Task<int>>`, whereas the resolution for `Bar` is determining that the conversion to `Func<int>` is better than the conversion to `Action`. All the conversions are valid - if you comment out the `Foo1` and `Bar2` methods, it still compiles, but gives output of `Foo2`, `Bar1`. With the C# 5 compiler, the `Foo` call is ambiguous by the `Bar` call resolves to `Bar2`, just like with the C# 7 compiler. With a bit more research, the synchronous form is specified in 12.6.4.4 of the ECMA C# 5 specification: > > C1 is a better conversion than C2 if at least one of the following holds: > > > - ... > - E is an anonymous function, T1 is either a delegate type D1 or an expression tree type Expression, T2 is either a delegate type D2 or an expression tree type Expression and one of the following holds: > - D1 is a better conversion target than D2 *(irrelevant for us)* > - D1 and D2 have identical parameter lists, and one of the following holds: > - D1 has a return type Y1, and D2 has a return type Y2, an inferred return type X exists for E in the context of that parameter list (§12.6.3.13), and the conversion from X to Y1 is better than the conversion from X to Y2 > - E is async, D1 has a return type `Task<Y1>`, and D2 has a return type `Task<Y2>`, an inferred return type `Task<X>` exists for E in the context of that parameter list (§12.6.3.13), and the conversion from X to Y1 is better than the conversion from X to Y2 > - **D1 has a return type Y, and D2 is void returning** > > > So that makes sense for the non-async case - and it also makes sense for how the C# 5 compiler isn't able to resolve the ambiguity, because those rules don't break the tie. We don't have a full C# 6 or C# 7 specification yet, but there's a [draft one available](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/expressions). Its overload resolution rules are expressed somewhat differently, and the change may be there somewhere. If it's going to compile to anything though, I'd expect the `Foo` overload accepting a `Func<Task<int>>` to be chosen over the overload accepting `Func<Task>` - because it's a more specific type. (There's a reference conversion from `Func<Task<int>>` to `Func<Task>`, but not vice versa.) Note that the *inferred return type* of the lambda expression would just be `Func<Task>` in both the C# 5 and draft C# 6 specifications. Ultimately, overload resolution and type inference are **really hard** bits of the specification. This answer explains why the `while(true)` loop makes a difference (because without it, the overload accepting a func returning a `Task<T>` isn't even applicable) but I've reached the end of what I can work out about the choice the C# 7 compiler makes.
Foreign Key relationship between two columns in one table
I'm building a database table representing a 'note' that has a parent-child relationship in two of its columns, like so:

```
ID INT(10) PK NOT NULL UNSIGNED ZEROFILL AUTOINCREMENT -- Primary key
parent_ID INT(10) UNSIGNED ZEROFILL -- References Primary key
username VARCHAR(30) NOT NULL
information VARCHAR(256) NOT NULL
comment VARCHAR(256) NOT NULL
```

where `username` is a foreign key to a `user` table and is the username of the person who wrote the note, `information` is the information contained in the note and `comment` is what `username` added. `comment` will always have something in it, and is not necessarily the same as the previous user's.

The concept is that someone can "copy" a note and have their own comments, and say "I got this note from xxx", hence the parent-child relationship. A bit like sharing on facebook, perhaps.

How do I correctly form the parent_ID column? Should I use an identifying relationship; which sides are mandatory? I figure this has to be a 1-to-Many relationship since five people can copy the same note.

I expect there to be more copying than creating new notes, so there will be a relatively small number of `NULL`'s in the records, but to get rid of nulls entirely should I apply a `NOT NULL` constraint and simply make the default parent `0`, with an essentially meaningless record at `ID 0`, and take note of this in the software? Is this even the correct way?

Should I employ a two-table system with:

```
ID INT(10) PK
information VARCHAR(256)
orig_user VARCHAR(30) FK -- Potentially
```

and

```
ID INT(10) PK FK
username VARCHAR(30) PK FK
comment VARCHAR(256)
```

Which eliminates any possible `NULL`'s by definition?

Thanks
> > Should I use an identifying relationship > > > No, only `ID` should be in PK since it **alone** is unique. Also, a root (parent-less) note will have a NULL `parent_ID` and NULLs cannot be in PK. > > I figure this has to be a 1-to-Many relationship since five people can copy the same note. > > > Correct. It's also a tree, since there can be multiple levels of copying. > > ... should I apply a NOT NULL constraint and simply make the default parent 0 with an essentially meaningless record at ID 0 and take note of this in the software? > > > No need, FK would still be enforced on 0 and you'd need a "dummy" row just to satisfy it, as you already noted. FK ignores NULLs, so just put a NULL in the root note's `parent_ID`. > > Should I employ a two-table system... > > > This doesn't model the same thing. It just allows you to connect to the original user, but not original note. A *different* two-table design would be viable if a single note could have multiple parents, but that doesn't seem to be the case here. > > Which eliminates any possible NULL's by definiton? > > > You seem very intent on eliminating NULLs. Any reason for that? --- All in all, you should store notes in just one table, like this: ``` CREATE TABLE note ( ID INT(10) UNSIGNED AUTO_INCREMENT PRIMARY KEY, parent_ID INT(10) UNSIGNED, username VARCHAR(30) NOT NULL, information VARCHAR(256) NOT NULL, comment VARCHAR(256) NOT NULL, FOREIGN KEY (parent_ID) REFERENCES note (ID), FOREIGN KEY (username) REFERENCES `user` (username) ); ```
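To make the parent/child usage concrete, a small illustrative sketch (the referenced ID is an assumption - with auto-increment the first inserted row would normally get ID 1):

```sql
-- a root note: no parent, so parent_ID is simply NULL
INSERT INTO note (parent_ID, username, information, comment)
VALUES (NULL, 'alice', 'Some information', 'My original note');

-- a copy of note 1 made by another user
INSERT INTO note (parent_ID, username, information, comment)
VALUES (1, 'bob', 'Some information', 'I got this note from alice');
```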
Remove HTML from string I am trying to clear the HTML coding from my RSS feed. I can not work out how to set the below to take out the HTML encoding. ``` var rssFeed = XElement.Parse(e.Result); var currentFeed = this.DataContext as app.ViewModels.FeedViewModel; var items = from item in rssFeed.Descendants("item") select new ATP_Tennis_App.ViewModels.FeedItemViewModel() { Title = item.Element("title").Value, DatePublished = DateTime.Parse(item.Element("pubDate").Value), Url = item.Element("link").Value, Description = item.Element("description").Value }; foreach (var item in items) currentFeed.Items.Add(item); ```
Just use the following code:

```
var withHtml = "<p>hello <b>there</b></p>";
var withoutHtml = Regex.Replace(withHtml, "<.+?>", string.Empty);
```

This will strip the HTML, leaving only the text, so "hello there".

So, you can just copy and use this function:

```
string RemoveHtmlTags(string html)
{
    return Regex.Replace(html, "<.+?>", string.Empty);
}
```

Your code will look something like this:

```
var rssFeed = XElement.Parse(e.Result);
var currentFeed = this.DataContext as app.ViewModels.FeedViewModel;
var items = from item in rssFeed.Descendants("item")
            select new ATP_Tennis_App.ViewModels.FeedItemViewModel()
            {
                Title = RemoveHtmlTags(item.Element("title").Value),
                DatePublished = DateTime.Parse(item.Element("pubDate").Value),
                Url = item.Element("link").Value,
                Description = RemoveHtmlTags(item.Element("description").Value)
            };
```
Full text search installed or not
I have installed SQL Server 2008 R2, and when I run this SQL in SQL Server Management Studio:

```
SELECT FULLTEXTSERVICEPROPERTY('IsFullTextInstalled')
```

I get 0

But if I run this:

```
SELECT * FROM sys.fulltext_catalogs
```

I get one row.

I want to know if full text search is installed on my SQL Server, or whether I need to reinstall SQL Server with advanced options. Please suggest.
***My answer:*** If FULLTEXTSERVICEPROPERTY says it's not installed, then I would install from the original media. Run through the installer and simply add Full Text Search from the features page. FTS is [fully in the SQL Engine in 2008](http://msdn.microsoft.com/en-us/library/ms142541.aspx) so if it thinks it isn't installed, then ya best make it happy. ***My opinions/ponderings:*** Did you move a database from a previous SQL installation that had full text installed? That might explain the row in sys.fulltext\_catalogs. When you open a Database in SSMS, under the Storage Folder, Full Text Catalog folder do you have the option to add a New Catalog when you right click? In SQL Configuration Manager do you see the Full Text Daemon Launcher service? ![enter image description here](https://i.stack.imgur.com/nEour.jpg)
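As a quick sanity check from SSMS, both of these documented server properties should agree once the feature is installed (1 means installed, 0 means not):

```sql
SELECT FULLTEXTSERVICEPROPERTY('IsFullTextInstalled') AS FullTextServiceProperty,
       SERVERPROPERTY('IsFullTextInstalled')          AS ServerProperty;
```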
Conversion from cert file to pfx file
Is it possible to convert a cert file to a pfx file? I tried importing my cert file into IE, but it is never shown under the "personal" tab, so I cannot export it from there. I am looking to see if there are any alternatives available.

FYI, the cert file was created by using "keytool" and then doing an export to a cert file.
This article describes two ways of creating a .pfx file from a .cer file: - [Maxime Lamure: Create your own .pfx file for ClickOnce](http://blogs.msdn.com/b/maximelamure/archive/2007/01/24/create-your-own-pfx-file-for-clickonce.aspx) > > Create your public & private Keys (You will be prompt to define the private key’s password): > > > `makecert.exe -sv MyKey.pvk -n "CN=.NET Ready!!!" MyKey.cer` > > > Create your PFX file from the public and private key > > > `pvk2pfx.exe -pvk MyKey.pvk -spc MyKey.cer -pfx MyPFX.pfx -po toto` > > > Programmaticaly you could do so in C# by writing the byte array directly to a file: ``` byte[] certificateData = certificate.Export(X509ContentType.Pfx, "YourPassword"); File.WriteAllBytes(@"C:\YourCert.pfx", certificateData); ``` And generally (if you're using IE 8) you might want to have a look at this answer on SO: - [How to make IE8 trust a self-signed certificate in 20 irritating steps](https://stackoverflow.com/questions/681695/what-do-i-need-to-do-to-get-ie8-to-accept-a-self-signed-certificate/1412118#1412118) Hope that helps you.
When should I use subroutine attributes? I don't grok Perl subroutine attributes at all. I have never seen them in actual code and [`perldoc perlsub`](http://perldoc.perl.org/perlsub.html#Subroutine-Attributes) and the [`perldoc attributes`](http://perldoc.perl.org/attributes.html) fail to answer my questions: - What are attributes useful for? - What do they bring to the table that is not already present in Perl best practices? - Are there any CPAN modules (well-known or otherwise) that make use of attributes? It would be great if someone could put together a detailed example of attributes being used the way they should be. --- For those who are as clueless as me, attributes are the parameters after the colon in the **`attributes SYNOPSIS`** examples below: ``` sub foo : method ; my ($x,@y,%z) : Bent = 1; my $s = sub : method { ... }; use attributes (); # optional, to get subroutine declarations my @attrlist = attributes::get(\&foo); use attributes 'get'; # import the attributes::get subroutine my @attrlist = get \&foo; ```
Attributes allow you annotate variables to perform auto-magic behind the scenes. A similar concept is [java annotations](http://docs.oracle.com/javase/6/docs/technotes/guides/language/annotations.html). Here is a small example that might help. It uses `Attribute::Handlers` to create the `loud` attributes. ``` use Attribute::Handlers; sub UNIVERSAL::loud : ATTR(CODE) { my ( $pkg, $sym, $code ) = @_; no warnings 'redefine'; *{$sym} = sub { return uc $code->(@_); }; } sub foo : loud { return "this is $_[0]"; } say foo("a spoon"); say foo("a fork"); ``` Whenever a sub is declared with the `loud` attribute the `UNIVERSAL::loud` callback triggers exposing meta-information on the sub. I redefined the function to actually call an anonymous sub, which in turn calls the original sub and passes it to `uc` This outputs: ``` THIS IS A SPOON THIS IS A FORK ``` Now let's looks a the variable example from the [SYNOPSIS](http://perldoc.perl.org/attributes.html#SYNOPSIS): ``` my ($x,@y,%z) : Bent = 1; ``` Breaking this down into small perl statement without taking into account attributes we have ``` my $x : Bent $x = 1; my @y : Bent @y = 1; my %Z : Bent %z = 1; ``` We can now see that each variable has been attributed the Bent annotation in a concise way, while also assigning all variables the value 1. Here is a perhaps more interesting example: ``` use Attribute::Handlers; use Tie::Toggle; sub UNIVERSAL::Toggle : ATTR(SCALAR) { my ($package, $symbol, $referent, $attr, $data, $phase) = @_; my @data = ref $data eq 'ARRAY' ? @$data : $data; tie $$referent, 'Tie::Toggle', @data; } my $x : Toggle; say "x is ", $x; say "x is ", $x; say "x is ", $x; ``` Which outputs: ``` x is x is 1 x is ``` You can use this to do logging, create test annotations, add type details to variables, syntactic sugar, do moose-ish role composition and many other cool things. Also see this question: [How do Perl method attributes work?](https://stackoverflow.com/questions/987059/how-do-perl-method-attributes-work).
properly rendering an SDL rectangle to the screen Here is my code, I am following lazyfoos tutorials on SDL, here is the exact tutorial I am following - <http://lazyfoo.net/tutorials/SDL/08_geometry_rendering/index.php> > > Notice the call to SDL\_SetRenderDrawColor. We're using 255 red and 255 > green which combine together to make yellow. Remember that call to > SDL\_SetRenderDrawColor at the top of the loop? If that wasn't there, > the screen would be cleared with whatever color was last set with > SDL\_SetRenderDrawColor, resulting in a yellow background in this case. > > > Lazyfoo does explain it above but it still doesn't make sense to me. To draw a filled rectangle to the screen is a pretty trivial task but it's a task that sometimes causes a lot of confusion,for example you need to call SDL\_SetRenderDrawColor() not once but twice,once before you clear the renderer,and also once before you call SDL\_RenderFillRect(). Why do you need to call SDL\_SetRenderDrawColor() twice and why in that order? I noticed if I comment out the first SDL\_SetRenderDrawColor() just before the call to SDL\_RenderFillRect(),the full window will be the colour you set the rectangle to be,but when you include the two calls to SDL\_SetRenderDrawColor() in the order I specified the window shows a coloured rectangle in the centre of the screen with the rest of the screen being white(first SDL\_SetRenderDrawColor() call). Here is my game loop where the calls are made. ``` while( !quit ) { while( SDL_PollEvent( &event ) != 0 ) { if( event.type == SDL_QUIT ) { quit = true; } } SDL_SetRenderDrawColor( renderer, 255, 255, 255, 0 ); // line of code in question SDL_RenderClear( renderer ); SDL_Rect fillRect = { 500 / 4, 500 / 4, 500 / 2, 500 / 2 }; SDL_SetRenderDrawColor( renderer, 0x00, 0xFF, 0x00, 0xFF ); // 2nd line of code in question SDL_RenderFillRect( renderer, &fillRect ); SDL_RenderPresent( renderer ); } ```
> > Why do you need to call SDL\_SetRenderDrawColor() twice and why in that > order? > > > The name of `SDL_RenderClear` is a bit misleading. It doesn't clear the screen to "empty" or anything - it just fills it with whatever color was set by `SDL_SetRenderDrawColor`. So if you don't change the color between "clearing" and drawing the rectangle, then you won't see the rectangle because you're drawing it with the same color that you just filled the entire screen with. So here ``` SDL_SetRenderDrawColor( gRenderer, 0xFF, 0xFF, 0xFF, 0xFF ); SDL_RenderClear( gRenderer ); ``` You make the whole screen white. We do this by setting white, and then painting over the entire screen with white. Then here ``` SDL_SetRenderDrawColor( gRenderer, 0xFF, 0x00, 0x00, 0xFF ); ``` We set red so the rectangle here ``` SDL_RenderFillRect( gRenderer, &fillRect ); ``` Will be red (not white). And if I remember the tutorial correctly, it also draws a line and some other things, every time calling `SDL_SetRenderDrawColor` right before to set the correct color. > > I noticed if I comment out the first SDL\_SetRenderDrawColor() just > before the call to SDL\_RenderFillRect(),the full window will be the > colour you set the rectangle to be > > > Very good observation! You see, since you're looping (`while(!quit){`) you do `SDL_RenderClear` and then `SDL_RenderFillRect`... but then `SDL_RenderClear` comes again, and so on. So when `SDL_RenderClear` happens, the color was actually set from right before `SDL_RenderFillRect` in the *last* run through the loop. Hence why it has that color, too. So actually, I don't know what the color is at the very first time because it's not set yet (might be a default value of white, or something), but we probably can't see it because that's just on the first run through the loop anyway. So what happens roughly is: ``` ... SDL_RenderClear(renderer); SDL_SetRenderDrawColor( renderer, 0x00, 0xFF, 0x00, 0xFF ); SDL_RenderFillRect( renderer, &fillRect ); SDL_RenderClear(renderer); SDL_SetRenderDrawColor( renderer, 0x00, 0xFF, 0x00, 0xFF ); SDL_RenderFillRect( renderer, &fillRect ); SDL_RenderClear(renderer); SDL_SetRenderDrawColor( renderer, 0x00, 0xFF, 0x00, 0xFF ); SDL_RenderFillRect( renderer, &fillRect ); ... ``` So you see, with only that second `SDL_SetRenderDrawColor` call, both `SDL_RenderClear` and `SDL_RenderFillRect` will always draw in green, except the very first `SDL_RenderClear` call in the first frame.
Understanding 'most vexing parse' - why allow ambiguous syntax? While trying to understand the "Most vexing parse" problem in C/C++, this question immediately springs to mind - why have a syntax that causes this problem to begin with? For example, ``` class Timer { public: Timer(); }; class TimeKeeper { public: TimeKeeper(const Timer& t); int get_time() { return 1; } }; int main() { TimeKeeper time_keeper(Timer()); // the above is eq to this: TimeKeeper time_keeper(Timer (*)()); } ``` So why not simply disallow `TimeKeeper time_keeper(Timer())` to be a function declaration that takes an unnamed function ptr returning type Timer? Is `TimeKeeper time_keeper(Timer (*)())` inferior as a function declarator? Is it not due to this syntax that we even get this ambiguity or am I missing something? EDIT: Personally, I've never used `TimeKeeper time_keeper(Timer())` as a function declaration. I've always used the `Timer (*)()` to specify a function pointer as I find it clearer.
> > So why not simply disallow TimeKeeper time\_keeper(Timer()) to be a function declaration that takes an unnamed function ptr returning type Timer? > > > Suppose for a while that this function declaration is diallowed, because it uses *unnamed* parameter. If that is so, then the following *declarations* will be disallowed as well: ``` int max(int,int); //error (in hypothetical C++) int min(int,int); //error (in hypothetical C++) ``` And *then* the programmers will be *forced* to write the parameter *name*, in the declarations as well: ``` int max(int a,int b); //ok int min(int a,int b); //ok ``` But then someone else would stand up and ask : *"Why am I forced to write the parameter name(s) in the declarations when it doesn't use it? Why is it not optional?*" I think this guy is rational and what he asked has point. It is indeed irrational to *force* programmers to name the parameter in the declarations. -- Reading your comment, it seems that you think the following declarations are *exactly* same: ``` int max(Timer()); int max(Timer(*)()); ``` No. They're not *exactly* same from the *syntax* point of view, though they are exactly same from the *behavior* point of view. The subtle difference is that in the former, the parameter type is a function which takes nothing, and returns `Timer`, while in the later, the parameter type is a *pointer* to a function which takes nothing, and returns `Timer`. Do you see the difference? But then the question is, why are they same behavior-wise? Well the answer is, in the former declaration, the parameter type is *adjusted* and then *becomes* a pointer type, and so it behaves the same as the second declaration. The C++03 Standard says in §13.1/3, > > Parameter declarations that differ only in that one is a *function type* > and the other is *a pointer to the same function type* are equivalent. > **That is, the function type is adjusted to become a pointer to function > type** (8.3.5). > > > I hope it is same in C++11 also. -- Your doubt (taken from the comment): > > Still not any closer to understanding why we need the 2 syntax? > > > Because they are two *different* types. Function type, and pointer to function type. Only as parameter type, they behaves same. Otherwise, they're different. See my answer here to see where they behave differently: - [Reference to Function syntax - with and without &](https://stackoverflow.com/questions/7321993/reference-to-function-syntax-with-and-without) And since they behave differently in other situations, we *have* them, we *need* them. The Standard doesn't (and should not) disallow one syntax, just because as parameter type they behave same.
Difference between Object() and Object{}
C# will allow you to create an object using either Object() or Object{}. What is the difference between Object() and Object{}?

```
public item getitem()
{
    return new item();
}

public item getitem()
{
    return new item {};
}
```
This syntax:

```
new SomeType{}
```

is an [*object initializer expression*](http://msdn.microsoft.com/en-us/library/bb384062.aspx) which happens to set no properties. It calls the parameterless constructor implicitly. You can add property assignments within the braces:

```
new SomeType { Name = "Jon" }
```

This syntax:

```
new SomeType()
```

is *just* a call to the parameterless constructor, with no opportunities to set properties.

Note that you can *explicitly* call a constructor (parameterized or not) with an object initializer too:

```
// Explicit parameterless constructor call
new SomeType() { Name = "Jon" }

// Call to constructor with parameters
new SomeType("Jon") { Age = 36 }
```

See section 7.6.10.2 of the C# 4 specification for more details about object initializers.

I would highly recommend that if you *aren't* setting any properties, you just use `new SomeType()` for clarity. It's odd to use an object initializer without setting any properties.
How do I use Feature Tags from Azure App Config in Blazor?
I am attempting to use feature flags from Azure App Config in my Blazor application. I have been following this [documentation](https://learn.microsoft.com/en-us/azure/azure-app-configuration/quickstart-feature-flag-aspnet-core?tabs=core3x) from Microsoft to add them. I am successfully able to use them by injecting the FeatureManager into my code sections and utilizing the feature tag in .cshtml files like so:

```
<feature name="Beta">
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" asp-controller="Beta" asp-action="Index">Beta</a>
    </li>
</feature>
```

However, I have not been able to successfully use the feature tag in my .razor files. This comes as a result of not being able to add this directive to my files.

```
@addTagHelper *, Microsoft.FeatureManagement.AspNetCore
```

Is there a way to use the feature tag in my Razor files?
AFAIK Blazor does not yet have a similar component/tag-helper. But not to worry - it's quite easy to check the flags manually. ``` @using Microsoft.FeatureManagement @inject IFeatureManager FeatureManager @if (betaFeatureIsEnabled) { <h1>My Beta Feature</h1> } @code { private bool betaFeatureIsEnabled = false; protected override async Task OnInitializedAsync() { betaFeatureIsEnabled = await FeatureManager.IsEnabledAsync("Beta"); } } ``` From this we can roll our own little helper component to make this look quite neat. ### FeatureFlagView.razor (helper component) ``` @using Microsoft.FeatureManagement @inject IFeatureManager FeatureManager @if (featureIsEnabled) { @ChildContent } @code { private bool featureIsEnabled = false; [Parameter] public RenderFragment ChildContent { get; set; } [Parameter] public string FlagName { get; set; } protected override async Task OnInitializedAsync() { if (string.IsNullOrEmpty(FlagName)) return; featureIsEnabled = await FeatureManager.IsEnabledAsync(FlagName); } } ``` ### Using the FeatureFlagView inside a .razor page/component ``` <FeatureFlagView FlagName="Beta"> <h1>My Beta Feature</h1> </FeatureFlagView> ``` This could then be extended quite a bit to handle more complex scenarios like requiring multiple flags, not flags etc and more importantly using Enum's (instead of strings).
Blazor: Reduce/Elimination of animation speed of a chart is it possible to reduce or even turn off the animation speed at the usage of the ChartJs.Blazor's BarChart component in Blazor? I have found this NuGet package to be very useful, but I don't see how would it be possible to turn off the animation whenever I update my BarChart. For the easier overlook, here is a simplified version that I am testing on now: ``` @using ChartJs.Blazor.ChartJS.BarChart @using ChartJs.Blazor.ChartJS.BarChart.Axes @using ChartJs.Blazor.ChartJS.Common.Axes @using ChartJs.Blazor.ChartJS.Common.Axes.Ticks @using ChartJs.Blazor.ChartJS.Common.Properties @using ChartJs.Blazor.ChartJS.Common.Wrappers @using ChartJs.Blazor.Charts @using ChartJs.Blazor.Util @using BootstrapChart1.Data <h2>Simple Bar Chart</h2> <div class="row"> <button class="btn btn-primary" @onclick="AddData"> Add Data </button> <button class="btn btn-primary" @onclick="RemoveData"> Remove Data </button> </div> <ChartJsBarChart @ref="_barChart" Config="@_barChartConfig" Width="600" Height="300" /> @code { private BarConfig _barChartConfig; private ChartJsBarChart _barChart; private BarDataset<DoubleWrapper> _barDataSet; protected override void OnInitialized() { _barChartConfig = new BarConfig { Options = new BarOptions { Title = new OptionsTitle { Display = true, Text = "Simple Bar Chart" }, Scales = new BarScales { XAxes = new List<CartesianAxis> { new BarCategoryAxis { BarPercentage = 0.5, BarThickness = BarThickness.Flex } }, YAxes = new List<CartesianAxis> { new BarLinearCartesianAxis { Ticks = new LinearCartesianTicks { BeginAtZero = true } } } }, ResponsiveAnimationDuration = 0, } }; _barChartConfig.Data.Labels.AddRange(new[] {"A", "B", "C", "D"}); _barDataSet = new BarDataset<DoubleWrapper> { Label = "My double dataset", BackgroundColor = new[] {ColorUtil.RandomColorString(), ColorUtil.RandomColorString(), ColorUtil.RandomColorString(), ColorUtil.RandomColorString()}, BorderWidth = 0, HoverBackgroundColor = ColorUtil.RandomColorString(), HoverBorderColor = ColorUtil.RandomColorString(), HoverBorderWidth = 1, BorderColor = "#ffffff" }; _barDataSet.AddRange(new double[] {8, 5, 3, 7}.Wrap()); _barChartConfig.Data.Datasets.Add(_barDataSet); } private void AddData() { var nowSecond = DateTime.Now.Second; _barChartConfig.Data.Labels.Add(nowSecond.ToString()); _barDataSet.Add(new DoubleWrapper(nowSecond)); _barChart.Update(); } } ``` Source of the extension: <https://github.com/mariusmuntean/ChartJs.Blazor>
I'm one of the co-authors of the ChartJs.Blazor library.

In order to disable the animations, you have to set the animation duration to 0 wherever possible. This is documented in the chart.js docs, [performance section](https://www.chartjs.org/docs/latest/general/performance.html#disable-animations).

You can already set the `BarOptions.ResponsiveAnimationDuration` and the `BarOptions.Hover.AnimationDuration` to `0`, but currently we're missing the `BarOptions.Animation` option. There is a pull request open but not merged/released yet. I will fix this before the next nuget-release.

If you need it right now, you can subclass the `BarOptions` class and add the `Animation` property (of type `Animation`) yourself. Then use your subclass instead of the original `BarOptions` and you can set `YourBarOptions.Animation.Duration` to `0` as well.

If you don't manage to do that, write a comment; I can include the code you need but please try it yourself first :)

## Update

I have fixed it and released a new version. [Release-1.1.0](https://github.com/mariusmuntean/ChartJs.Blazor/releases/tag/v1.1.0) is published and available [on nuget](https://www.nuget.org/packages/ChartJs.Blazor/).
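For anyone stuck on a version without the fix, here is a rough sketch of the subclassing workaround described above. The member names (`Animation`, `Duration`) follow what this answer describes for ChartJs.Blazor 1.x and may need adjusting to the exact version you use:

```csharp
// Hypothetical subclass adding the missing option
public class AnimationBarOptions : BarOptions
{
    public Animation Animation { get; set; }
}

// ...and in the component, use it in place of BarOptions:
_barChartConfig = new BarConfig
{
    Options = new AnimationBarOptions
    {
        Animation = new Animation { Duration = 0 },
        ResponsiveAnimationDuration = 0,
        // keep the Title/Scales configuration from the question here
    }
};
```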
C++, how to tokenize this string? How can I get string like "Ac milan" and "Real Madryt" if they are separated with whitespace? Here is my attempt: ``` string linia = "Ac milan ; Real Madryt ; 0 ; 2"; str = new char [linia.size()+1]; strcpy(str, linia.c_str()); sscanf(str, "%s ; %s ; %d ; %d", a, b, &c, &d); ``` but it doesn't work; I have: `a= Ac;` `b = (null); c=0; d=2;`
Yes, sscanf *can* do what you're asking for, using a scanset conversion: ``` #include <stdio.h> #include <iostream> #include <string> int main(){ char a[20], b[20]; int c=0, d=0; std::string linia("Ac milan ; Real Madryt ; 0 ; 2"); sscanf(linia.c_str(), " %19[^;]; %19[^;] ;%d ;%d", a, b, &c, &d); std::cout << a << "\n" << b << "\n" << c << "\n" << d << "\n"; return 0; } ``` The output produced by this is: ``` Ac milan Real Madryt 0 2 ```
jsf message severity
How can I fetch only the messages whose severity is ERROR? I tried:

```
Iterator<FacesMessage> messages = facesContext.getMessages(clientId);
while (messages.hasNext()){
    if(messages.next().getSeverity().toString()=="ERROR 2")System.out.println(messages);
}
```

Is this the right way? It does not intercept messages with ERROR severity. Any help would be highly appreciated.
The comparison is wrong. You cannot (reliably) compare Strings on its content with `==`. When comparing objects with `==`, it would only return `true` if they are of the same **reference**, not *value* as you seem to expect. Objects needs to be compared with `Object#equals()`. But you can compare **constants** with `==`. The [`FacesMessage.Severity`](http://java.sun.com/javaee/5/docs/api/javax/faces/application/FacesMessage.html#field_summary) values are all static constants. You should rather just compare `Severity` with `Severity`. Also the sysout is wrong, it is printing the iterator instead of the sole message. This should work: ``` Iterator<FacesMessage> messages = facesContext.getMessages(clientId); while (messages.hasNext()) { FacesMessage message = messages.next(); if (message.getSeverity() == FacesMessage.SEVERITY_ERROR) { System.out.println("Error: " + message); } } ```
Why typename keyword is not needed in template dependent nested type names in VS2015? I was reading about the usage of `typename` in C++ template programming (e.g. [this Q/A](https://stackoverflow.com/questions/610245/where-and-why-do-i-have-to-put-the-template-and-typename-keywords)). To me, it seems that when using a **dependent nested type name**, we should use `typename` for avoiding parsing ambiguity. I also checked this on Scot Meyers book [effective C++](https://rads.stackoverflow.com/amzn/click/com/0321334876), item #42. But what is strange for me is that the same example in the book, works without the `typename`. Here is the code: ``` template<class C> void Print2nd(const C & cont) { if (cont.size() >= 2) { C::const_iterator * iter1 = new C::const_iterator(cont.begin()); // why typename is NOT needed? C::const_iterator iter2 = cont.begin(); // why typename is NOT needed? (*iter1)++; iter2++; int value1 = **iter1; int value2 = *iter2; std::cout << "The value of 2nd with pointer is: " << value1 << std::endl; std::cout << "The value of 2nd without pointer is: " << value2 << std::endl; } } int main() { std::vector<int> vect = {1,2,3,4,5,6}; Print2nd(vect); return 0; } ``` I am using VS2015. So, the Q is that why `typename` is not needed in this context? Is there any upgrade in recent C++ compilers to avoid using `typename` in such a context? Or I am doing a mistake in the code? **Update 1:** Thanks to @FrançoisAndrieux comment, it seems that the same thing is happening in VS2008 and VS2010, as reported in [this Q/A](https://stackoverflow.com/questions/5683257/why-is-typename-not-needed-here-in-visual-studio-2008-2010).
In [c++20](/questions/tagged/c%2b%2b20 "show questions tagged 'c++20'") `typename` is not needed there. In some contexts, the need for `typename` was removed, because syntactically anything there *must* be a type. In particular: > > A qualified name that appears in type-id, where the smallest enclosing type-id is: > > > - the type in a new expression that does not parenthesize its type; > > > [Quoted source](https://en.cppreference.com/w/cpp/language/dependent_name#The_typename_disambiguator_for_dependent_names) isn't directly from the standard, but pretty reliable. Prior to [c++20](/questions/tagged/c%2b%2b20 "show questions tagged 'c++20'") `typename` was needed there; it would be parsed as a value, and `new value` is not valid syntax. In [c++20](/questions/tagged/c%2b%2b20 "show questions tagged 'c++20'") `typename` is *optional* in that context. Now, [visual-studio-2015](/questions/tagged/visual-studio-2015 "show questions tagged 'visual-studio-2015'") has no [c++20](/questions/tagged/c%2b%2b20 "show questions tagged 'c++20'") features in it; what you are seeing there is MSVC's failure to properly implement [c++11](/questions/tagged/c%2b%2b11 "show questions tagged 'c++11'")/[c++14](/questions/tagged/c%2b%2b14 "show questions tagged 'c++14'")/[c++17](/questions/tagged/c%2b%2b17 "show questions tagged 'c++17'"), not a [c++20](/questions/tagged/c%2b%2b20 "show questions tagged 'c++20'") extension.
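A minimal illustration of that rule (the first form compiles under any standard; the second only with a C++20-conforming compiler):

```cpp
#include <vector>

template <class C>
void f(const C& cont)
{
    auto* a = new typename C::const_iterator(cont.begin()); // always fine
    auto* b = new C::const_iterator(cont.begin());          // OK since C++20: anything here must be a type
    delete a;
    delete b;
}

int main()
{
    std::vector<int> v{1, 2, 3};
    f(v);
}
```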
Running multiple commands in cmd via psexec I'm working on creating a single command that will run multiple things on the command line of another machine. Here is what I'm looking to do:

- Use psexec to access the remote machine
- travel to the proper directory
- execute an ant task
- exit cmd
- run together in one line

I can run the below command from Run to complete what I need accomplished, but can't seem to get the format correct for psexec to understand it.

```
cmd /K cd /d D:\directory & ant & exit
```

I've tried applying this to the psexec example below:

```
psexec \\machine cmd /K cd /d D:\directory & ant & exit
```

When executing this it will activate the command line and travel to `D:\directory` but won't execute the remaining commands. Adding `""` just creates more issues.

Can anyone guide me to the correct format? Or suggest something other than psexec that I can use to complete this (free options only)?
Figured it out finally after some more internet searching and trial and error. To run multiple commands, cmd needs `/c` instead of `/K` so that it runs the command string and then terminates, but that alone doesn't work with the syntax I wrote above: the `&` separators and parentheses also have to be escaped with carets (`^`) so they are passed through to the remote cmd instead of being interpreted locally. I've gotten the below command to run what I need.

```
psexec \\machine cmd /c (^d:^ ^& cd directory^ ^& ant^)
```

I don't need to exit because psexec will exit itself upon completion. You can also use `&&` to require success before continuing on to the next command.

Found this forum helpful [http://forum.sysinternals.com/psexec\_topic318.html](https://web.archive.org/web/20161018222251/http://forum.sysinternals.com/psexec_topic318.html)

And this for running psexec commands <http://ss64.com/nt/psexec.html>
Check if css property has !important attribute applied If I have a style like this -

```
div#testdiv {position:absolute;top:10px !important;}
```

I can query the `top` value with jQuery like this -

```
$("#testdiv").css("top");
```

which will return the value `10px`. Is it possible to use jQuery or JavaScript to check if the `top` property has had the `!important` attribute applied to it?
First of all, such a solution does not seem to exist in jQuery. Many of the available JavaScript solutions use the function `getPropertyPriority()`. First, this function is not supported by IE6-IE8 (see [here](http://www.quirksmode.org/dom/w3c_css.html#misc) and [here](http://msdn.microsoft.com/en-us/library/ff974377%28v=vs.85%29.aspx)). Second, this function does not work directly on elements if their style is not declared *inline*. So we would be able to get the important property in the following case:

```
<div id="testdiv" style="top : 10px !important;">Some div</div>

<script type="text/javascript">
    // should show 'important' in the console.
    console.log(document.getElementById("testdiv").style.getPropertyPriority('top'));
</script>
```

However, if we declare the style of `#testdiv` in a CSS stylesheet instead, we get an empty string. Also, the `CSSStyleDeclaration` interface is not available in IE6-8. Of course, this is pretty useless this way. We need a different approach.

I've put this approach into a [JSFiddle](http://jsfiddle.net/PCM3Z/). We can read the `!important` property directly from the CSS stylesheets, which are contained in the array `document.styleSheets[]` (Opera 8 and below do not support this array). At [Quirksmode](http://www.quirksmode.org/dom/w3c_css.html#access) you can see which methods are supported to access the stylesheets. Based on this information we can do the following:

- For IE6-8, we use `styleSheets[].imports` to access the imported stylesheets (and keep doing this recursively until we do not find any import statements anymore) and then use `styleSheets[].rules` to add the CSS rules of each stylesheet to an array.
- For other browsers, we use `styleSheets[].cssRules`, which contains both the import rules and the CSS rules. We detect the import rules by checking whether a rule implements the *CSSImportRule* interface and use these to access the CSS rules in the imported stylesheets recursively.

In both cases we add a CSS rule to the array only if it matches the HTMLElement (in your case `#testdiv`). This results in an array of CSS rules that match the HTMLElement. This is basically what the `getMatchedCSSRules()` function in WebKit browsers does; however, we write it ourselves here.

Based on this information we write our `hasImportant(htmlNode, property)` function, where `htmlNode` is an HTMLElement (your testdiv) and `property` is the CSS property ('top' in your case). First, we check if the inline style of the top property has an important attribute. This saves us looking through the stylesheets if it does contain this attribute.

We write a new function `isImportant(node, property)` which uses our good old function `node.style.getPropertyPriority(property)`. However, as mentioned earlier in this answer, this function is not supported in IE6-IE8. We can write the function ourselves: in IE the property `node.style.cssText` contains the declaration block text. We search for the property ('top') in this block of text and check if its value contains '!important'.

We can reuse this function on every CSS rule obtained using the `getMatchedCSSRules` function, by looping through all CSS rules that match the htmlNode and calling the `isImportant` function.

All of the above can be found in the code below.
This is the basic approach and probably should be fine-tuned further: - some code might be replaced with jQuery - some code might be simplified - css rules implementing the CSSMediaRule interface and [other interfaces](http://www.w3.org/TR/DOM-Level-2-Style/css.html#CSS-CSSStyleSheet) might cause some problems for this code and an error check should be performed - there might be simpler approach, but I am not aware of any other method to get this working cross browser. ``` var debug = true; /** * Get the css rules of a stylesheet which apply to the htmlNode. Meaning its class * its id and its tag. * @param CSSStyleSheet styleSheet * @param HTMLElement htmlNode */ function getCssRules(styleSheet, htmlNode) { if ( !styleSheet ) return null; var cssRules = new Array(); if (styleSheet.cssRules) { var currentCssRules = styleSheet.cssRules; // Import statement are always at the top of the css file. for ( var i = 0; i < currentCssRules.length; i++ ) { // cssRules all contains the import statements. // check if the rule is an import rule. if ( isImportRule(currentCssRules[i]) ) { // import the rules from the imported css file. var importCssRules = getCssRules(currentCssRules[i].styleSheet, htmlNode); if ( importCssRules != null ) { // Add the rules from the import css file to the list of css rules. cssRules = addToArray(cssRules, importCssRules, htmlNode); } // Remove the import css rule from the css rules. styleSheet.deleteRule(i); } else { // We found a rule that is not an CSSImportRule break; } } // After adding the import rules (lower priority than those in the current stylesheet), // add the rules in the current stylesheet. cssRules = addToArray(cssRules, currentCssRules, htmlNode); } else if (styleSheet.rules) { // IE6-8 // rules do not contain the import statements. var currentCssRules = styleSheet.rules; // Handle the imports in a styleSheet file. if ( styleSheet.imports ) { // IE6-8 use a seperate array which contains the imported css files. var imports = styleSheet.imports; for ( var i = 0; i < imports.length; i++ ) { var importCssRules = getCssRules(imports[i], htmlNode); if ( importCssRules != null ) { // Add the rules from the import css file to the list of css rules. cssRules = addToArray(cssRules, importCssRules, htmlNode); } } } // After adding the import rules (lower priority than those in the current stylesheet), // add the rules in the current stylesheet. cssRules = addToArray(cssRules, currentCssRules, htmlNode); } return cssRules; } /** * Since a list of rules is returned, we cannot use concat. * Just use old good push.... * @param CSSRuleList cssRules * @param CSSRuleList cssRules * @param HTMLElement htmlNode */ function addToArray(cssRules, newRules, htmlNode) { for ( var i = 0; i < newRules.length; i++ ) { if ( htmlNode != undefined && htmlNode != null && isMatchCssRule(htmlNode, newRules[i]) ) cssRules.push(newRules[i]); } return cssRules; } /** * Matches a htmlNode to a cssRule. If it matches, return true. * @param HTMLElement htmlNode * @param CSSRule cssRule */ function isMatchCssRule(htmlNode, cssRule) { // Simply use jQuery here to see if there cssRule matches the htmlNode... return $(htmlNode).is(cssRule.selectorText); } /** * Verifies if the cssRule implements the interface of type CSSImportRule. * @param CSSRule cssRule */ function isImportRule(cssRule) { return cssRule.constructor.toString().search("CSSImportRule") != -1; } /** * Webkit browsers contain this function, but other browsers do not (yet). * Implement it ourselves... 
* * Finds all matching CSS rules for the htmlNode. * @param HTMLElement htmlNode */ function getMatchedCSSRules(htmlNode) { var cssRules = new Array(); // Opera 8- don't support styleSheets[] array. if ( !document.styleSheets ) return null; // Loop through the stylesheets in the html document. for ( var i = 0; i < document.styleSheets.length; i++ ) { var currentCssRules = getCssRules(document.styleSheets[i], htmlNode) if ( currentCssRules != null ) cssRules.push.apply(cssRules, currentCssRules); } return cssRules; } /** * Checks if the CSSStyleRule has the property with 'important' attribute. * @param CSSStyleRule node * @param String property */ function isImportant(node, property) { if ( node.style.getPropertyPriority && node.style.getPropertyPriority(property) == 'important' ) return true; else if ( node.style.cssText && getPropertyPriority(node.style.cssText, property) == 'important' ) { // IE6-8 // IE thinks that cssText is part of rule.style return true; } } /** * getPropertyPriority function for IE6-8 * @param String cssText * @param String property */ function getPropertyPriority(cssText, property) { var props = cssText.split(";"); for ( var i = 0; i < props.length; i++ ) { if ( props[i].toLowerCase().indexOf(property.toLowerCase()) != -1 ) { // Found the correct property if ( props[i].toLowerCase().indexOf("!important") != -1 || props[i].toLowerCase().indexOf("! important") != -1) { // IE automaticaly adds a space between ! and important... return 'important'; // We found the important property for the property, return 'important'. } } } return ''; // We did not found the css property with important attribute. } /** * Outputs a debug message if debugging is enabled. * @param String msg */ function debugMsg(msg) { if ( debug ) { // For debugging purposes. if ( window.console ) console.log(msg); else alert(msg); } } /** * The main functionality required, to check whether a certain property of * some html element has the important attribute. * * @param HTMLElement htmlNode * @param String property */ function hasImportant(htmlNode, property) { // First check inline style for important. if ( isImportant(htmlNode, property) ) { // For debugging purposes. debugMsg("Inline contains important!"); return true; } var rules = getMatchedCSSRules(htmlNode); if ( rules == null ) { debugMsg("This browser does not support styleSheets..."); return false; } /** * Iterate through the rules backwards, since rules are * ordered by priority where the highest priority is last. */ for ( var i = rules.length; i-- > 0; ) { var rule = rules[i]; if ( isImportant(rule, property) ) { // For debugging purposes. debugMsg("Css contains important!"); return true; } } return false; } $(document).ready(function() { hasImportant($('#testdiv')[0], 'top'); }); ```
Understanding Firefox extension structure I'm trying to write a Firefox extension that intercepts a certain HTTP request and returns static content without the request making it to the actual server (similar to AdBlock). I've looked up the tutorials and I've got a basic file layout. I've also worked out that I need to use the [nsITraceableChannel API](https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsITraceableChannel) and add an observer to do what I want, and I have example code for that. Problem is, where do I actually put this code? And when is my extension actually loaded and executed? Is it running constantly and asynchronously in the background, or is it loaded per page view? The documentation doesn't seem very clear on this. This extension won't need a GUI, so I don't need the layout XUL files (or do I?). I tried writing an XPCOM component (I don't think I did it right though) and registered it in `chrome.manifest`, but it doesn't seem to run. Can anyone explain exactly how Firefox extensions work and where I should put my actual JavaScript code to monitor requests? Or have I got the whole idea of what an extension is wrong? Is there a difference between add-ons, extensions and plugins?
Concerning the difference between add-ons, extensions and plugins you should look at [this answer](https://stackoverflow.com/questions/7575658/firefox-add-on-vs-extensions-vs-plugins/7580811#7580811). But in general, you seem to have the correct idea. The problem is, there are currently three very different types of extensions: 1. Classic extensions (not restartless): these will typically [overlay the browser window](https://developer.mozilla.org/en/XUL_Overlays) and run code from this overlay. Since there is one overlay per window, there will be as many code instances as browser windows. However, classic extensions can also register an XPCOM component ([via `chrome.manifest` as of Gecko 2.0](https://developer.mozilla.org/en/XPCOM/XPCOM_changes_in_Gecko_2.0#JavaScript_components)). This component will be loaded on first use and stay around for the entire browsing session. You probably want your component to load when the browser starts, for this you should register it in the `profile-after-change` category and implement `nsIObserver`. 2. Restartless extensions, also called [bootstrapped extensions](https://developer.mozilla.org/en/Extensions/Bootstrapped_extensions): these cannot register overlays which makes working with the browser UI somewhat more complicated. Instead they have a `bootstrap.js` script that will load when the extension is activated, this context will stay around in background until the browser is shut down or the extension is disabled. You can have XPCOM components in restartless extensions as well but you will have to register them manually (via [nsIComponentRegistrar.registerFactory()](https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsIComponentRegistrar#registerFactory%28%29) and [nsICategoryManager.addCategoryEntry()](https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsICategoryManager#addCategoryEntry%28%29)). You will also have to take care of unregistering the component if the extension is shut down. This is unnecessary if you merely need to add an observer, [nsIObserverService](https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsIObserverService#addObserver%28%29) will take any object implementing `nsIObserver`, not only one that has been registered as an XPCOM component. The big downside is: most MDN examples are about classic extensions and don't explain how you would do things in a restartless extension. 3. Extensions based on the [Add-on SDK](https://addons.mozilla.org/en-US/developers/builder): these are based on a framework that produces restartless extensions. The Add-on SDK has its [own API](https://addons.mozilla.org/en-US/developers/docs/sdk/1.7/) which is very different from what you usually do in Firefox extension - but it is simple, and it mostly takes care of shutting down the extension so that you don't have to do it manually. Extensions here consist of a number of modules, with `main.js` loading automatically and being able to load additional modules as necessary. Once loaded, each module stays around for as long as the extension is active. They run sandboxed but you can still [leave the sandbox](https://addons.mozilla.org/en-US/developers/docs/sdk/1.7/dev-guide/tutorials/chrome.html) and access XPCOM directly. However, you would probably use the internal [`observer-service` module](https://addons.mozilla.org/en-US/developers/docs/sdk/1.7/packages/api-utils/observer-service.html) instead.
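For the original question (intercepting HTTP requests), the piece that ties these options together is registering an `nsIObserver` for the `"http-on-modify-request"` notification. The following is only a rough sketch of how that registration could look in a restartless extension's `bootstrap.js`; these legacy XPCOM APIs are long obsolete in current Firefox, so treat it as an illustration of the structure rather than a drop-in implementation:

```
// bootstrap.js (restartless extension) -- illustrative sketch only
const { interfaces: Ci, utils: Cu } = Components;
Cu.import("resource://gre/modules/Services.jsm");

var httpObserver = {
  observe: function (subject, topic, data) {
    if (topic === "http-on-modify-request") {
      var channel = subject.QueryInterface(Ci.nsIHttpChannel);
      // Inspect channel.URI.spec here; to serve static content instead of the
      // real response, this is where nsITraceableChannel would be hooked in.
    }
  }
};

function startup(data, reason) {
  // Runs when the extension is activated; the observer stays registered
  // for the whole browsing session.
  Services.obs.addObserver(httpObserver, "http-on-modify-request", false);
}

function shutdown(data, reason) {
  // Unregister when the extension is disabled or the browser shuts down,
  // as described above.
  Services.obs.removeObserver(httpObserver, "http-on-modify-request");
}

function install(data, reason) {}
function uninstall(data, reason) {}
```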
How can I arrange numbers in this side-to-side pattern? I am trying to make this pattern in PHP: ``` 1 2 3 4 8 7 6 5 9 10 11 12 ``` I tried this, but was unsuccessful: ``` $num = 0; for ($i=0; $i<=2; $i++) { for ($j=1; $j<=5; $j++) { $num = $j - $i+$num; echo $num.""; $num--; } echo "</br>"; } ``` Can anyone help me please? Thanks in advance...
Here is the simplest and fastest code I was able to make using two loops. It's easier with three loops and there are multiple ways to achieve this but here is the simplest one according to me. ``` <?php $num = 1; $change = true; $cols = 5; $rows = 5; for ($i = 0; $i < $rows; $i++) { if (!$change) { $num += ($cols - 1); } for ($j = 0; $j < $cols; $j++) { echo $num . " "; if (!$change) { $num--; } else { $num++; } } if (!$change) { $num += ($cols + 1); } $change = !$change; echo "<br>"; } ``` **NOTE:** You have to define the number of columns in `$cols` variable. It will work with any case.
What is the difference between PublicKeyToken and public key? Each signed .NET assembly has both a public key token (8 bytes) and a public key (128 bytes). What is the difference between the two, and why do we need two public "keys"?
The public key token is just a **hash of the public key**. See [here](http://en.wikipedia.org/wiki/.NET_assembly) for more info.

---

## UPDATE

> Why do we need the public key?

Because an assembly can be signed, and signed assemblies contain the public key. When loading the DLL, .NET will use the public key to validate the assembly against the signature. The signature can only be generated using the private key, while the public key itself can be used to validate the signature. This process makes sure the assembly has not been tampered with. From CLR via C#:

> Signing an assembly with a private key
> ensures that the holder of the
> corresponding public key produced the
> assembly. When the assembly is
> installed into the GAC, the system
> hashes the contents of the file
> containing the manifest and compares
> the hash value with the RSA digital
> signature value embedded within the PE
> file (after unsigning it with the
> public key). If the values are
> identical, the file's contents haven't
> been tampered with, and you know that
> you have the public key that
> corresponds to the publisher's private
> key. In addition, the system hashes
> the contents of the assembly's other
> files and compares the hash values
> with the hash values stored in the
> manifest file's FileDef table. If any
> of the hash values don't match, at
> least one of the assembly's files has
> been tampered with, and the assembly
> will fail to install into the GAC.

---

## UPDATE 2

Why is the public key *token* needed? Because the full public key is too big to work with (again from CLR via C#):

> The size of public keys makes them
> difficult to work with. To make things
> easier for the developer (and for end
> users too), public key tokens were
> created. A public key token is a
> 64-bit hash of the public key.
> SN.exe's -tp switch shows the public
> key token that corresponds to the
> complete public key at the end of its
> output.
> Because public keys are such large
> numbers, and a single assembly might
> reference many assemblies, a large
> percentage of the resulting file's
> total size would be occupied with
> public key information. To conserve
> storage space, Microsoft hashes the
> public key and takes the last 8 bytes
> of the hashed value. These reduced
> public key values—known as public key
> tokens—are what are actually stored in
> an AssemblyRef table. In general,
> developers and end users will see
> public key token values much more
> frequently than full public key
> values. Note, however, that the CLR
> never uses public key tokens when
> making security or trust decisions
> because it is possible that several
> public keys could hash to a single
> public key token.
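If you want to see both values side by side, here is a small sketch (my own, not from the original answer) that prints the public key and its token for an already-loaded assembly; `sn.exe -Tp <assembly>` shows the same information from the command line:

```
using System;
using System.Linq;
using System.Reflection;

class ShowKeys
{
    static void Main()
    {
        // Any strong-named assembly works; the core library is always loaded.
        AssemblyName name = typeof(object).Assembly.GetName();

        byte[] key = name.GetPublicKey();          // the full public key blob
        byte[] token = name.GetPublicKeyToken();   // the 8-byte token

        Console.WriteLine("Public key ({0} bytes): {1}",
            key.Length, string.Concat(key.Select(b => b.ToString("x2"))));
        Console.WriteLine("Public key token ({0} bytes): {1}",
            token.Length, string.Concat(token.Select(b => b.ToString("x2"))));
    }
}
```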
Unique Salt per User using Flask-Security After reading here a bit about salting passwords, it seems that it's best to use a unique salt for each user. I'm working on implementing Flask-Security atm, and from the documentation it appears you can only set a global salt: ie SECURITY\_PASSWORD\_SALT = 'thesalt' Question: How would one go about making a unique salt for each password? Thanks! edit: from the docs on Flask-Security, I found this, which seems to again suggest that this module only uses a single salt for all passwords out of the box. ``` flask_security.utils.get_hmac(password) Returns a Base64 encoded HMAC+SHA512 of the password signed with the salt specified by SECURITY_PASSWORD_SALT. ```
Yes, Flask-Security does use per-user salts by design if you are using bcrypt (or other schemes such as des\_crypt, pbkdf2\_sha256, pbkdf2\_sha512, sha256\_crypt, sha512\_crypt). The config value 'SECURITY\_PASSWORD\_SALT' is only used for the HMAC step. If you are using bcrypt as the hashing algorithm, Flask-Security uses passlib for hashing, and passlib generates a random salt during hashing. This confusion is noted in issue 268: <https://github.com/mattupstate/flask-security/issues/268>

It can be verified in the code, walking from encrypt to passlib:

flask\_security/utils.py (lines 143-151, 39, and 269)

```
def encrypt_password(password):
    ...
    return _pwd_context.encrypt(signed)

_pwd_context = LocalProxy(lambda: _security.pwd_context)
```

flask\_security/core.py (269, 244-251, and 18)

```
pwd_context=_get_pwd_context(app)

def _get_pwd_context(app):
    ...
    return CryptContext(schemes=schemes, default=pw_hash, deprecated=deprecated)

from passlib.context import CryptContext
```

and finally from: <https://pythonhosted.org/passlib/password_hash_api.html#passlib.ifc.PasswordHash.encrypt>

> note that each call to encrypt() generates a new salt,
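A quick way to convince yourself of the per-hash salt (this snippet is mine, not from the original answer, and assumes passlib with a bcrypt backend is installed):

```
# Each call to hash() embeds a freshly generated random salt, so the same
# password produces different hash strings, yet both still verify.
from passlib.hash import bcrypt

h1 = bcrypt.hash("secret")   # passlib >= 1.7; older versions used bcrypt.encrypt
h2 = bcrypt.hash("secret")

print(h1 == h2)                      # False -- different salts
print(bcrypt.verify("secret", h1))   # True
print(bcrypt.verify("secret", h2))   # True
```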
Generate a range of dates using a CTE in Oracle I want to generate a range of days between two different dates using a recursive WITH clause in Oracle.

```
WITH CTE_Dates (cte_date) AS ( 
    SELECT CAST(TO_DATE('10-02-2017', 'DD-MM-YYYY') AS DATE) cte_date FROM dual 
    UNION ALL 
    SELECT CAST( (cte_date + 1) AS DATE) cte_date 
    FROM CTE_Dates 
    WHERE TRUNC(cte_date) + 1 <= TO_DATE('20-02-2017', 'DD-MM-YYYY') 
) 
SELECT * FROM CTE_Dates
```

The returned results are completely different from what I expected:

```
10-02-2017
09-02-2017
08-02-2017
07-02-2017
06-02-2017
... (unlimited)
```

The expected results:

```
10-02-2017
11-02-2017
...
19-02-2017
20-02-2017
```

Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production.

**Edit:** As I understand, this is a known bug in Oracle; it exists through Oracle 11.2.0.2 and was fixed in 11.2.0.3. Alternative solution:

```
SELECT TRUNC (TO_DATE('10-02-2017', 'DD-MM-YYYY') + ROWNUM -1) dt
  FROM DUAL CONNECT BY ROWNUM <= (TO_DATE('20-02-2017', 'DD-MM-YYYY') - (TO_DATE('10-02-2017', 'DD-MM-YYYY')))
```
This was a known bug in recursive CTEs in Oracle 11 (specifically with regard to date arithmetic). It was fixed in Oracle 12. The behavior is exactly what you describe: whether you add or subtract in your code, the engine always subtracts; it never adds.

**EDIT**: Actually, as Alex Poole pointed out in a comment to the original post, the bug exists through Oracle 11.2.0.2 and it was fixed in 11.2.0.3. **End edit**

Alas, I am not a paying customer, so I can't quote chapter and verse, but with a little bit of Googling you will find links to this (including on OTN, where I was involved in a few threads discussing this and other bugs in recursive CTEs - some were fixed, some are still bugs in Oracle 12.1).

**Added** - here is one of those discussions: <https://community.oracle.com/thread/3974408>
Python: Use of decorators vs. mixins? I have understood the basics of decorators and mixins. Decorators add new functionality to an object without changing other object instances of the same class, while a mixin is a kind of multiple inheritance used to inherit from multiple parent classes. Does this mean that decorators should be used when you need to modify only a single object instance, and mixins when you need a whole new class of objects? Or is there something more to it that I might be missing? What are some real-life use cases for both?
In my opinion, you need mixins when you have a few different classes that should have the same functionality. Good examples of using mixins are Django's class-based views. For example, you have a few different classes: FormView, TemplateView, ListView. All of them have one similar piece of functionality: they have to render templates. Every one of these classes has a mixin, which adds the methods required for template rendering. Another example is if you needed to add a class for an API that returns a JSON result. It could also be inherited from the base View class. You simply skip the template mixins and define what you need (and probably write your own mixin for JSON encoding). Additionally, you may override some of the methods provided by mixins, which allows you to modify parts of the common code for your local case. It's all about OOP, buddy! Long story short: **mixins add new functionality**. Decorators are used to **modify existing functionality**. For example, if you need to log what is returned from a method in your class, the right choice is a decorator (added to the appropriate methods). Hope it is helpful. If not, please ask questions. I will update my response.
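To make the contrast concrete, here is a small sketch of my own (not from the original answer); the class and function names are made up for illustration:

```
import json


class JsonRenderMixin:
    """Mixin: gives any class that inherits it the ability to render JSON."""
    def render_json(self, data):
        return json.dumps(data)


class ApiView(JsonRenderMixin):
    """Gains render_json() simply by mixing the class in."""
    def get(self):
        return self.render_json({"ok": True})


def log_result(func):
    """Decorator: wraps an existing callable to modify/extend its behaviour."""
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result!r}")
        return result
    return wrapper


@log_result
def add(a, b):
    return a + b


print(ApiView().get())   # {"ok": true}
add(2, 3)                # prints: add returned 5
```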
Parallel computing in Julia - running a simple for-loop on multiple cores For starters, I have to say I'm completely new to parallel computing (and know close to nothing about computer science), so my understanding of what things like "workers" or "processes" actually are is very limited. I do however have a question about running a simple for-loop that presumably has no dependencies between the iterations in parallel. Let's say I wanted to do the following: ``` for N in 1:5:20 println("The N of this iteration in $N") end ``` If I simply wanted these messages to appear on screen and the order of appearance didn't matter, how could one achieve this in Julia 0.6, and for future reference in Julia 0.7 (and therefore 1.0)?
# Distributed Processing

Start julia with e.g. `julia -p 4` if you want to use 4 CPUs (or use the function `addprocs(4)`). In Julia 1.x, you make a parallel loop as follows:

```
using Distributed

@distributed for N in 1:5:20
  println("The N of this iteration in $N")
end
```

Note that every process has its own variables by default. For any serious work, have a look at the manual <https://docs.julialang.org/en/v1/manual/parallel-computing/>, in particular the section about SharedArrays. Other options for distributed computing are the function `pmap` or the package `MPI.jl`.

# Threads

Since Julia 1.3, you can also use threads, as noted by wueli. Start julia with e.g. `julia -t 4` to use 4 threads. Alternatively you can set the environment variable `JULIA_NUM_THREADS` before starting julia. For example, on Linux/Mac OS:

```
export JULIA_NUM_THREADS=4
```

On Windows, you can use `set JULIA_NUM_THREADS=4` in the cmd prompt. Then in julia:

```
Threads.@threads for N = 1:20
    println("N = $N (thread $(Threads.threadid()) out of $(Threads.nthreads()))")
end
```

All CPUs are assumed to have access to shared memory in the examples above (e.g. "OpenMP style" parallelism), which is the common case for multi-core CPUs.
SELECT DISTINCT Cassandra in Spark I need a query that lists out the unique *Composite Partition Keys* inside of Spark.

The query in Cassandra: `SELECT DISTINCT key1, key2, key3 FROM schema.table;` is quite fast, however putting the same sort of data filter in an RDD or spark.sql retrieves results incredibly slowly in comparison.

e.g.

```
---- SPARK ----
var t1 = sc.cassandraTable("schema","table").select("key1", "key2", "key3").distinct()
var t2 = spark.sql("SELECT DISTINCT key1, key2, key3 FROM schema.table")

t1.count // takes 20 minutes
t2.count // takes 20 minutes

---- CASSANDRA ----
// takes < 1 minute while also printing out all results
SELECT DISTINCT key1, key2, key3 FROM schema.table;
```

where the table format is like:

```
CREATE TABLE schema.table (
    key1 text,
    key2 text,
    key3 text,
    ckey1 text,
    ckey2 text,
    v1 int,
    PRIMARY KEY ((key1, key2, key3), ckey1, ckey2)
);
```

Doesn't Spark use Cassandra optimisations in its queries? How can I retrieve this information efficiently?
## Quick Answers > > Doesn't spark use cassandra optimisations in its' queries? > > > Yes. But with SparkSQL only column pruning and predicate pushdowns. In RDDs it is manual. > > How can I retreive this information efficiently? > > > Since your request returns quickly enough, I would just use the Java Driver directly to get this result set. --- ## Long Answers While Spark SQL can provide some C\* based optimizations these are usually limited to predicate pushdowns when using the DataFrame interface. This is because the framework only provides limited information to the datasource. We can see this by doing an *explain* on the query you have written. ## Lets start with the SparkSQL example ``` scala> spark.sql("SELECT DISTINCT key1, key2, key3 FROM test.tab").explain == Physical Plan == *HashAggregate(keys=[key1#30, key2#31, key3#32], functions=[]) +- Exchange hashpartitioning(key1#30, key2#31, key3#32, 200) +- *HashAggregate(keys=[key1#30, key2#31, key3#32], functions=[]) +- *Scan org.apache.spark.sql.cassandra.CassandraSourceRelation test.tab[key1#30,key2#31,key3#32] ReadSchema: struct<key1:string,key2:string,key3:string> ``` So your Spark example will actually be broken into several steps. 1. *Scan : Read all the data from this table. This is means serializing every value from the C* machine to the Spark Executor JVM, in other words lots of work. 2. \*HashAggregate/Exchange/Hash Aggregate: Take the values from each executor, hash them locally then exchange the data between machines and hash again to ensure uniqueness. In layman's terms this means creating large hash structures, serializing them, running a complicated distributed sortmerge, then running a hash again. (Expensive) Why doesn't any of this get pushed down to C\*? This is because [Datasource](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala) (The CassandraSourceRelation in this case) is not given the information about the *Distinct* part of the query. This is just part of how Spark currently works. [Docs on what is pushable](https://github.com/datastax/spark-cassandra-connector/blob/master/doc/14_data_frames.md#automatic--predicate-pushdown-and-column-pruning) ## So what about the RDD version? With RDDS we give a direct set of instructions to Spark. This means if you want to push something down it must be [manually specified](https://github.com/datastax/spark-cassandra-connector/blob/master/doc/3_selection.md#server-side-data-selection-filtering-and-grouping). Let's see the debug output of the RDD request ``` scala> sc.cassandraTable("test","tab").distinct.toDebugString res2: String = (13) MapPartitionsRDD[7] at distinct at <console>:45 [] | ShuffledRDD[6] at distinct at <console>:45 [] +-(13) MapPartitionsRDD[5] at distinct at <console>:45 [] | CassandraTableScanRDD[4] at RDD at CassandraRDD.scala:19 [] ``` Here the issue is that your "distinct" call is a generic operation on an [RDD](https://github.com/apache/spark/blob/4df51361a5ff1fba20524f1b580f4049b328ed32/core/src/main/scala/org/apache/spark/rdd/RDD.scala#L402-L407) and not specific to Cassandra. Since RDDs require all optimizations to be explicit (what you type is what you get) Cassandra never hears about this need for "Distinct" and we get a plan that is almost identical to our Spark SQL version. Do a full scan, serialize all of the data from Cassandra to Spark. Do a Shuffle and then return the results. ## So what can we do about this? 
With SparkSQL this is about as good as we can get without adding new rules to [Catalyst](https://databricks.com/blog/2015/04/13/deep-dive-into-spark-sqls-catalyst-optimizer.html) (the SparkSQL/Dataframes Optimizer) to let it know that Cassandra can handle some *distinct* calls at the server level. It would then need to be implemented for the CassandraRDD subclasses. For RDDs we would need to add a function like the already existing `where`, `select`, and `limit`, calls to the Cassandra RDD. A new `Distinct` call could be added [here](https://github.com/datastax/spark-cassandra-connector/blob/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/rdd/CassandraRDD.scala) although it would only be allowable in specific situations. This is a function that currently does not exist in the SCC but could be added relatively easily since all it would do is prepend `DISTINCT` to [requests](https://github.com/datastax/spark-cassandra-connector/blob/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/rdd/CassandraTableScanRDD.scala#L299-L302) and probably add some checking to make sure it is a `DISTINCT` that makes sense. ## What can we do right now today without modifying the underlying connector? Since we know the exact CQL request that we would like to make we can always use the Cassandra driver directly to get this information. The Spark Cassandra connector provides a driver pool we can use or we could just use the Java Driver natively. To use the pool we would do something like ``` import com.datastax.spark.connector.cql.CassandraConnector CassandraConnector(sc.getConf).withSessionDo{ session => session.execute("SELECT DISTINCT key1, key2, key3 FROM test.tab;").all() } ``` And then parallelize the results if they are needed for further Spark work. If we really wanted to distribute this it would be necessary to most likely add the function to the Spark Cassandra Connector as I described above.
Why does FutureBuilder snapshot.data return "Instance of Post" instead of json? I'm expecting a JSON object of data but instead am getting `Instance of 'Post'` I'm new to flutter and trying to hit an API with a post request using http.dart package. I'm using an async future and a future building to populate a widget with the returned data (following the flutter example here: <https://flutter.io/docs/cookbook/networking/fetch-data>). ``` Future<Post> fetchPost() async { String url = "https://example.com"; final response = await http.post(url, headers: {HttpHeaders.contentTypeHeader: 'application/json'}, body: jsonEncode({"id": "1"})); if (response.statusCode == 200) { print('RETURNING: ' + response.body); return Post.fromJson(json.decode(response.body)); } else { throw Exception('Failed to load post'); } } class Post { final String title; Post({this.title}); factory Post.fromJson(Map<String, dynamic> json) { return Post( title: json['title'] ); } } void main() => runApp(MyApp(post: fetchPost())); class MyApp extends StatelessWidget { final Future<Post> post; MyApp({Key key, this.post}) : super(key: key); @override Widget build(BuildContext context) { return MaterialApp( title: 'Fetch Data Example', theme: ThemeData( primarySwatch: Colors.blue, ), home: Scaffold( appBar: AppBar( title: Text('Fetch Data Example'), ), body: Center( child: FutureBuilder<Post>( future: post, builder: (context, snapshot) { if (snapshot.hasData) { return Text(snapshot.data.toString()); } else if (snapshot.hasError) { return Text("${snapshot.error}"); } // By default, show a loading spinner return CircularProgressIndicator(); }, ), ), ), ); } } ``` I'm expecting the return statement in the FutureBuilder to give me a json object. This is an existing API so I know it works and it returns what I'm expecting.
I am not sure what you refer to when you say "JSON object". Dart is a typed language and the way you represent anything coming as a json string is either a nested `Map<String, dynamic>` or a class (like in your case). If it is a class there is code that performs the actual de-/serialization. In your case the fromJson method in conjunction with the json.decode() can do the deserialization, but you have nothing for serialization yet. So your future builder is returning what you ask it to. The following piece of code defines the type of future clearly returning a `Post` object: ``` final Future<Post> post; ``` and you use it when creating your future builder: ``` child: FutureBuilder<Post>( future: post, ``` If you want a JSON `String` (or a `Map<String,dynamic>`) returned you need to start by doing that in your `fetchPost` method (which currently is also returning a `Post` object. For example: ``` Future<Map<String, dynamic>> fetchPost() async { // <------ CHANGED THIS LINE String url = "https://example.com"; final response = await http.post(url, headers: {HttpHeaders.contentTypeHeader: 'application/json'}, body: jsonEncode({"id": "1"})); if (response.statusCode == 200) { print('RETURNING: ' + response.body); return json.decode(response.body); // <------ CHANGED THIS LINE } else { throw Exception('Failed to load post'); } } ``` or like this: ``` Future<String> fetchPost() async { // <------ CHANGED THIS LINE String url = "https://example.com"; final response = await http.post(url, headers: {HttpHeaders.contentTypeHeader: 'application/json'}, body: jsonEncode({"id": "1"})); if (response.statusCode == 200) { print('RETURNING: ' + response.body); return response.body; // <------ CHANGED THIS LINE } else { throw Exception('Failed to load post'); } } ``` Then you need to work your way up until you change the Future in your MyApp class. ``` final Future<Map<String,dynamic>> post; ``` Please read [these docs](https://flutter.io/docs/development/data-and-backend/json) to learn about JSON in flutter.
How to upload images with cloud kit using swift? How do I upload and load back images from cloud kit with swift? What attribute type do I use? ![enter image description here](https://i.stack.imgur.com/h1FD7.png) What code do I use? This is the code I use currently... ``` func SaveImageInCloud(ImageToSave: UIImage) { let newRecord:CKRecord = CKRecord(recordType: "ImageRecord") newRecord.setValue(ImageToSave, forKey: "Image") if let database = self.privateDatabase { database.saveRecord(newRecord, completionHandler: { (record:CKRecord!, error:NSError! ) in if error != nil { NSLog(error.localizedDescription) } else { dispatch_async(dispatch_get_main_queue()) { println("finished") } } }) } ```
You need to create a CKAsset and add that to your record. You can do that with code like this: ``` func SaveImageInCloud(ImageToSave: UIImage) { let newRecord:CKRecord = CKRecord(recordType: "ImageRecord") let nsDocumentDirectory = NSSearchPathDirectory.DocumentDirectory let nsUserDomainMask = NSSearchPathDomainMask.UserDomainMask if let paths = NSSearchPathForDirectoriesInDomains(nsDocumentDirectory, nsUserDomainMask, true) { if paths.count > 0 { if let dirPath = paths[0] as? String { let writePath = dirPath.stringByAppendingPathComponent("Image2.png") UIImagePNGRepresentation(ImageToSave).writeToFile(writePath, atomically: true) var File : CKAsset? = CKAsset(fileURL: NSURL(fileURLWithPath: writePath)) newRecord.setValue(File, forKey: "Image") } } } if let database = self.privateDatabase { database.saveRecord(newRecord, completionHandler: { (record:CKRecord!, error:NSError! ) in if error != nil { NSLog(error.localizedDescription) } else { dispatch_async(dispatch_get_main_queue()) { println("finished") } } }) } ```
SwiftUI ForEach not iterating when array changes. List is empty and ForEach does run I'm trying to use MultiPeer Connectivity framework with swift ui and am having issues with using ForEach in my view. I have a singleton that I'm using to track connected users in an array: ``` class MPCManager: NSObject { static let instance = MPCManager() var devices: [Device] = [] ... ``` And my device class: ``` class Device: NSObject { let peerID: MCPeerID var session: MCSession? var name: String var state = MCSessionState.notConnected var lastMessageReceived: Message? ... } ``` When the MultiPeer connectivity frame finds new peers the MPCManager is appending new devices to the array. I have confirmed this in the debugger. The problem comes when I try to display the devices in a list. Here is the code that I'm using: ``` struct ContentView : View { var devices: [Device] = MPCManager.instance.devices var body: some View { List { ForEach(self.devices.identified(by: \.name)) { device in Text(device.name) } } } } ``` When the app starts, the list is displayed but it is empty. When I put a breakpoint in the view code inside the ForEach execution never stops. When I change the array to a hardcoded list of values, it displays just fine. I have also tried referencing the array from the static instance directly in my view like this: ``` ForEach(self.devices.identified(by: \.name)) { device in Text(device.name) } ``` Still nothing. I'm very new to swift so there may be something easy that I'm missing but I just don't see it. Any ideas?
There are a couple issues here as far as I can tell. First, I would suggest you try this with your `MPCManager`: ``` import SwiftUI import Combine class MPCManager: NSObject, BindableObject { var didChange = PassthroughSubject<Void, Never>() var devices: [Device] = [] { didSet { self.didChange.send(()) } } } ``` Then, in your `ContentView`, do this: ``` struct ContentView : View { @ObjectBinding var manager: MPCManager = MPCManager() var body: some View { List { ForEach(self.manager.devices.identified(by: \.name)) { device in Text(device.name) } } } } ``` The main difficulty with answering your question is that I can't run your code. Your question would be more useful to others (and much easier to answer) if you could distill your code down to something that people who might know the answer could just copy and paste into Xcode. # Update As of Xcode Beta 4, `identified(by:)` has been replaced by specific initializers for `List` and `ForEach`, and as of Xcode Beta 5, `BindableObject` has been replaced by `ObservableObject` and `@ObjectBinding` has been replaced by `@ObservedObject`. ``` import SwiftUI import Combine class MPCManager: NSObject, ObservableObject { var objectWillChange = PassthroughSubject<Void, Never>() var devices: [Device] = [] { willSet { self.objectWillChange.send() } } } ``` ``` struct ContentView : View { @ObservedObject var manager: MPCManager = MPCManager() var body: some View { List { ForEach(self.manager.devices, id: \.name) { device in Text(device.name) } } } } ```
Why does bind of the function instance supply the original value to the next computation? As a functional Javascript developer with only a vague understanding of Haskell I really have a hard time to understand Haskell idioms like monads. When I look at `>>=` of the function instance ``` (>>=) :: (r -> a) -> (a -> (r -> b)) -> r -> b instance Monad ((->) r) where f >>= k = \ r -> k (f r) r // Javascript: ``` and its application with Javascript ``` const bind = f => g => x => g(f(x)) (x); const inc = x => x + 1; const f = bind(inc) (x => x <= 5 ? x => x * 2 : x => x * 3); f(2); // 4 f(5); // 15 ``` the monadic function `(a -> (r -> b))` (or `(a -> m b)`) provides a way to choose the next computation depending on the previous result. More generally, the monadic function along with its corresponding `bind` operator seems to give us the capability to define what function composition means in a specific computational context. It is all the more surprising that the monadic function doesn't supply the result of the previous computation to the subsequent one. Instead, the original value is passed. I'd expect `f(2)`/`f(5)` to yield `6`/`18`, similar to normal function composition. Is this behavior specific to functions as monads? What do I misunderstand?
I think your confusion arises from using functions that are too simple. In particular, you write ``` const inc = x => x + 1; ``` whose type is a function that returns values in the *same* space as its input. Let's say `inc` is dealing with integers. Because both its input and output are integers, if you have another function `foo` that takes integers, it is easy to imagine using the *output* of `inc` as an *input* to `foo`. The real world includes more exciting functions, though. Consider the function `tree_of_depth` that takes an integer and creates a tree of strings of that depth. (I won't try to implement it, because I don't know enough javascript to do a convincing job of it.) Now all of a sudden it's harder to imagine passing the output of `tree_of_depth` as an input to `foo`, since `foo` is expecting integers and `tree_of_depth` is producing trees, right? The only thing we can pass on to `foo` is the *input* to `tree_of_depth`, because that's the only integer we have lying around, even after running `tree_of_depth`. Let's see how that manifests in the Haskell type signature for bind: ``` (>>=) :: (r -> a) -> (a -> r -> b) -> (r -> b) ``` This says that `(>>=)` takes two arguments, each functions. The first function can be of any old type you like -- it can take a value of type `r` and produce a value of type `a`. In particular, you don't have to promise that `r` and `a` are the same at all. But once you pick its type, then the type of the next function argument to `(>>=)` is constrained: it has to be a function of two arguments whose types are the *same* `r` and `a` as before. Now you can see why we have to pass the same value of type `r` to both of these functions: the first function produces an `a`, not an updated `r`, so we have no other value of type `r` to pass to the second function! Unlike your situation with `inc`, where the first function happened to also *produce* an `r`, we may be producing some other very different type. This explains why bind has to be implemented the way it is, but maybe doesn't explain why this monad is a useful one. There is writing elsewhere on that. But the canonical use case is for configuration variables. Suppose at program start you parse a configuration file; then for the rest of the program, you want to be able to influence the behavior of various functions by looking at information from that configuration. In all cases it makes sense to use the same configuration information -- it doesn't need to change. Then this monad becomes useful: you can have an implicit configuration value, and the monad's bind operation makes sure that the two functions you're sequencing both have access to that information without having to manually pass it in to both functions. P.S. You say > > It is all the more surprising that the monadic function doesn't supply the result of the previous computation to the subsequent one. > > > which I find slightly imprecise: in fact in `m >>= f`, the function `f` gets *both* the result of `m` (as its first argument) *and* the original value (as its second argument).
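To illustrate that last configuration use case, here is a tiny sketch of my own (not from the original answer) using the `((->) Config)` monad; the `Config` type and the functions are invented for the example:

```
-- Both computations read from the same shared Config; bind threads the
-- original Config value to every step, which is exactly the behaviour
-- discussed above.
data Config = Config { prefix :: String, verbose :: Bool }

greet :: String -> Config -> String
greet name = do          -- the monad here is ((->) Config)
  p <- prefix            -- prefix :: Config -> String, so p is the configured prefix
  v <- verbose           -- verbose :: Config -> Bool
  pure (if v then p ++ name ++ "!" else name)

main :: IO ()
main = putStrLn (greet "world" (Config "Hello, " True))  -- prints "Hello, world!"
```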
What is the best practice with media-queries in CSS3? I'm looking for answers to some questions about the CSS3 feature **`Media Queries`**:

1. Which way is **better** (for the browser, in terms of performance) for declaring CSS rules for different resolutions?

```
//this in head:
<link rel="stylesheet/less" href="/Content/site1024.less" media="screen and (max-width: 1024px)" />

//or this in css file:
@media only screen and (max-width: 1024px){
   //styles here
}
```

2. What is the difference between **`max-device-width`** and **`max-width`**? Is one rule addressed only to mobile (`max-device-width`) browsers and the other to desktop (`max-width`) browsers?

3. If I write a media query rule for a tablet with resolution `1280x800`, where the user can also switch between portrait and landscape mode, how should it look? Should I write rules for `max-width: 800px` and `max-width: 1280px`, or is there another way?

4. When I write rules, should I write something like this:

```
<link ... media="only screen and (min-device-width: 481px) and (max-device-width: 1024px)... />
```

or instead these two:

```
<link ... media="only screen and (max-device-width: 480px) ... />
<link ... media="only screen and (max-device-width: 1024px) ... />
```

---

*P.S. Please excuse any spelling or grammatical mistakes, English isn't my first language.*

*P.P.S. Before I posted this question I spent a while searching on Stack Overflow and didn't find information about this question. If I was wrong and there is a similar question, I will delete my post.*
1. Rules in the CSS file, to reduce the number of requests (better for performance).
2. - `max-width` is the width of the target display area
   - `max-device-width` is the width of the device's entire rendering area
3. Another way I know to target portrait or landscape is to add `orientation`, like this:

```
/* portrait */
@media only screen and (min-device-width: 768px) and (max-device-width: 1024px) and (orientation: portrait) {
   /* styles here */
}

/* landscape */
@media only screen and (min-device-width: 768px) and (max-device-width: 1024px) and (orientation: landscape) {
   /* styles here */
}
```

4. To define a stylesheet for mobile devices with a width between 320 and 480 pixels, you have to write:

```
<link rel="stylesheet" media="screen and (min-width: 320px) and (max-width: 480px)" href="mobile.css">
```
Tastypie Attributes & Related Names, Empty Attribute Error I'm getting this error: ``` The object '' has an empty attribute 'posts' and doesn't allow a default or null value. ``` I'm trying to get the number of 'votes' on a post and return it in my models.py: ``` class UserPost(models.Model): user = models.OneToOneField(User, related_name='posts') date_created = models.DateTimeField(auto_now_add=True, blank=False) text = models.CharField(max_length=255, blank=True) def get_votes(self): return Vote.objects.filter(object_id = self.id) ``` Here's my resource: ``` class ViewPostResource(ModelResource): user = fields.ForeignKey(UserResource,'user',full=True) votes= fields.CharField(attribute='posts__get_votes') class Meta: queryset = UserPost.objects.all() resource_name = 'posts' authorization = Authorization() filtering = { 'id' : ALL, } ``` What am I doing wrong?
The `attribute` value that you have defined isn't valid. You can achieve what you want in a few ways.

Define a `dehydrate` method:

```
def dehydrate(self, bundle):
    bundle.data['custom_field'] = bundle.obj.get_votes()
    return bundle
```

Or define `get_votes` as a property and declare the field in the resource like so (I recommend this one as it is the clearest):

```
votes = fields.CharField(attribute='get_votes', readonly=True, null=True)
```

Or define it this way:

```
votes = fields.CharField(readonly=True, null=True)
```

And in the resource define the `dehydrate_votes` method like so:

```
def dehydrate_votes(self, bundle):
    return bundle.obj.get_votes()
```
Is committing empty transactions expensive? In a certain Oracle 11.2 environment, I observe unnecessary commits, e.g. ``` -- some updates/inserts etc. commit; select * from mytable where somecond = 23; commit; ``` Or even: ``` update mytable set foo = '42'; commit; commit; ``` Thus, in both examples the 2nd commit is unnecessary, because the transaction is 'empty' - there is nothing to commit. The question is: How expensive are those unnecessary commits? Is the Oracle DB 'intelligent' enough to detect such empty transactions and replace those unnecessary commits with [NOPs](https://en.wikipedia.org/wiki/NOP) (nothing)? **Background:** those redundant commits sometimes seem to come from layers of certain framework where the programmer is unaware of the 'hidden' logic - or sometimes they are just oversights. Depending on how expensive they are (in terms of DB performance) it will make sense to fix the code with high priority.
Empty commits are not expensive and can be ignored. Empty commits only require a small amount of CPU and will not block other processes or prevent scalability. A normal commit is slow because it causes data to be written to disk to ensure durability. Database changes require REDO (in case the database crashes before the data can be written fully into the data files, allowing a recovery to roll foward), UNDO (so other transactions can see the data as it looked before the change), recording the new System Change Number to the control file, etc. A commit does not finish until those things are written to disk. An empty commit does not do any of those things. They use CPU time but probably just enough to check that there are no changes that need to be made. CPU time is cheap compared to writing to data files, redo logs, or control files. It should scale well. The below examples show that the real problem is when a COMMIT is executed after every tiny change. The run time alone is not enough to compare these three methods. If you look at the waits in GV$ACTIVE\_SESSION\_HISTORY, or the disk I/O (from Windows Resource Monitor, Solaris truss, etc), you'll see that example #1 below only uses CPU. Example #3 uses CPU, redo logs, data files, and control files. ``` --#1: Empty commits. --15 seconds for 1 million commits. --Only CPU waits, almost no I/O is generated. begin for i in 1..1000000 loop commit; end loop; end; / --#2: Just inserts. --34 seconds for 1 million inserts. --CPU waits, plus some "other" waits. create table test1(a number); begin for i in 1..1000000 loop insert into test1 values(1); end loop; commit; end; / --#3: Inserts and commits. --107 seconds for 1 million inserts and commits. --Lots of CPU waits, lots of "other" waits. begin for i in 1..1000000 loop insert into test1 values(1); commit; end loop; end; / ```
Error : Two output file names resolved to the same output path: "obj\Debug\Project1.Form1.resources" I'm getting the error: ``` Two output file names resolved to the same output path: "obj\Debug\Project1.Form1.resources" ``` This error comes while trying to run a windows form application I created. Some searches showed me that this occurs due to the occurrence of two .resx files. I have two .resx files in my application. Also I have two forms in my application. I created the second form by copying the first form and renaming and modifying the copy. The two .resx files are form1.resx and form2.resx. How can I remove this error?
Though I don't know why you would do it, you can use these instructions to copy a form properly. It is not recommended; it is better to inherit or use a user control. But if you must:

1. Delete the second form.
2. Recreate it by actually creating a form.
3. Copy the `InitializeComponent` method from `form1.designer` to the new form.
4. Also copy the part below `InitializeComponent`.
5. Copy the code of `form1` to the new form, and make sure to fix the constructor.
6. Please do not copy a full form using copy-paste.

**EDIT**

When someone pushes the change-page button you can do:

```
private void button1_Click(object sender, EventArgs e)
{
    Form2 frm = new Form2(NextPage);
    frm.Show();
    this.Hide();
}
```

Now, this is very basic syntax. You might want to have a master form that holds all the forms so you won't create new forms over and over. The design is up to you. This example gives you the basics of how to open and close forms.
Recommendations for a "Getting started with Greasemonkey" tutorial I'm interested in writing some Chrome compatible [Greasemonkey](http://en.wikipedia.org/wiki/Greasemonkey) scripts, but I'm finding that there are few updated tutorials on how to really get started properly with writing userscripts. [Dive Into Greasemonkey](http://web.archive.org/web/20110726001221/http://diveintogreasemonkey.org/) by [Mark Pilgrim](http://en.wikipedia.org/wiki/Mark_Pilgrim_%28software_developer%29) is five years old now and hasn't been updated. Searches for tutorials are rife with garbage hits, affiliate blog links and general nonsense. The wiki for Greasemonkey is nearly all reference, not really a "this is a good place to start" kind of article. Can you recommend a good, updated tutorial for getting started writing userscripts? I define "good" as: - How to setup an environment - Basic skills required to know - Common use examples - Best practices I define "updated" as: - Written or updated in the last 18 months
I am pretty sure there isn't a tutorial that meets all your requirements. Check out [Jaxov's post / "How To Use Greasemonkey Scripts In Google Chrome?"](https://web.archive.org/web/20210501080114/http://jaxov.com/2009/07/how-to-use-greasemonkey-scripts-in-google-chrome/) for a start. Anyway, Greasemonkey's not that hard once you know JavaScript; even though Greasemonkey on Chrome can be a bit of a pain. Play around with it; examine code that works. You could be the one to write the updated "Getting Started" documentation ;) Also for Chrome, check out [Tampermonkey](https://chrome.google.com/extensions/detail/dhdgffkkebhmkfjojejmpbldmpobfkfo).
Spring MVC 3.2 Thymeleaf Ajax Fragments I'm building an application with Spring MVC 3.2 and the Thymeleaf templating engine. I'm a beginner in Thymeleaf. I have everything working, including Thymeleaf, but I was wondering if anyone knows of a simple and clear tutorial on how to do a simple Ajax request to a controller and, as a result, render only a part of a template (a fragment).

My app has everything configured (Spring 3.2, spring-security, thymeleaf, ...) and works as expected. Now I would like to do an Ajax request (pretty simple with jQuery, but I don't want to use it since Thymeleaf in its tutorial, chapter 11: Rendering Template Fragments ([link](http://www.thymeleaf.org/doc/html/Thymeleaf-Spring3.html)), mentions it can be done with fragments).

Currently I have in my Controller:

```
@RequestMapping("/dimensionMenuList")
public String showDimensionMenuList(Model model) {
    Collection<ArticleDimensionVO> articleDimensions;
    try {
        articleDimensions = articleService.getArticleDimension(ArticleTypeVO.ARTICLE_TYPE);
    } catch (DataAccessException e) {
        // TODO: return ERROR
        throw new RuntimeException();
    }

    model.addAttribute("dimensions", articleDimensions);
    return "/admin/index :: dimensionMenuList";
}
```

the part of the view where I would like to replace the `<ul></ul>` menu items:

```
<ul th:fragment="dimensionMenuList" class="dropdown-menu">
     <li th:unless="${#lists.isEmpty(dimensions)}" th:each="dimension : ${dimensions}">
        <a href="#" th:text="${dimension.dimension}"></a>
     </li>
</ul>
```

Any clue is greatly appreciated, especially if I don't have to include any more frameworks. It's already too much for a Java web app as it is.
Here is an approach I came across in a [blog post](http://xpadro.blogspot.in/2014/02/thymeleaf-integration-with-spring-part-2.html): I didn't want to use those frameworks so in this section I'm using jQuery to send an AJAX request to the server, wait for the response and partially update the view (fragment rendering). **The Form** ``` <form> <span class="subtitle">Guest list form</span> <div class="listBlock"> <div class="search-block"> <input type="text" id="searchSurname" name="searchSurname"/> <br /> <label for="searchSurname" th:text="#{search.label}">Search label:</label> <button id="searchButton" name="searchButton" onclick="retrieveGuests()" type="button" th:text="#{search.button}">Search button</button> </div> <!-- Results block --> <div id="resultsBlock"> </div> </div> </form> ``` This form contains an input text with a search string (searchSurname) that will be sent to the server. There's also a region (resultsBlock div) which will be updated with the response received from the server. When the user clicks the button, the retrieveGuests() function will be invoked. ``` function retrieveGuests() { var url = '/th-spring-integration/spring/guests'; if ($('#searchSurname').val() != '') { url = url + '/' + $('#searchSurname').val(); } $("#resultsBlock").load(url); } ``` The jQuery load function makes a request to the server at the specified url and places the returned HTML into the specified element (resultsBlock div). If the user enters a search string, it will search for all guests with the specified surname. Otherwise, it will return the complete guest list. These two requests will reach the following controller request mappings: ``` @RequestMapping(value = "/guests/{surname}", method = RequestMethod.GET) public String showGuestList(Model model, @PathVariable("surname") String surname) { model.addAttribute("guests", hotelService.getGuestsList(surname)); return "results :: resultsList"; } @RequestMapping(value = "/guests", method = RequestMethod.GET) public String showGuestList(Model model) { model.addAttribute("guests", hotelService.getGuestsList()); return "results :: resultsList"; } ``` Since Spring is integrated with Thymeleaf, it will now be able to return fragments of HTML. In the above example, the return string "results :: resultsList" is referring to a fragment named resultsList which is located in the results page. Let's take a look at this results page: ``` <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org" lang="en"> <head> </head> <body> <div th:fragment="resultsList" th:unless="${#lists.isEmpty(guests)}" id="results-block"> <table> <thead> <tr> <th th:text="#{results.guest.id}">Id</th> <th th:text="#{results.guest.surname}">Surname</th> <th th:text="#{results.guest.name}">Name</th> <th th:text="#{results.guest.country}">Country</th> </tr> </thead> <tbody> <tr th:each="guest : ${guests}"> <td th:text="${guest.id}">id</td> <td th:text="${guest.surname}">surname</td> <td th:text="${guest.name}">name</td> <td th:text="${guest.country}">country</td> </tr> </tbody> </table> </div> </body> </html> ``` The fragment, which is a table with registered guests, will be inserted in the results block.
Zend\_Db: How to connect to a MySQL database over SSH tunnel? How can I connect to a MySQL database that requires an SSH tunnel using PHP and the Zend Framework?
Just start up an SSH tunnel and use the local port as your MySQL port. For example, you start the tunnel like this (the `user@remote-host` part is a placeholder for your SSH login),

```
ssh -f user@remote-host -L 3306:mysql-server.com:3306 -N
```

And you can connect to MySQL like this,

```
$conn = mysql_connect('localhost', 'mysql_user', 'mysql_password');
```

(If the MySQL client treats `localhost` as a Unix socket and bypasses the tunnel, use `127.0.0.1` as the host instead.)

For Zend_Db, you do this,

```
$config = new Zend_Config(
    array(
        'database' => array(
            'adapter' => 'Mysqli',
            'params'  => array(
                'host'     => 'localhost',
                'dbname'   => 'my_db',
                'username' => 'mysql_user',
                'password' => 'mysql_password',
            )
        )
    )
);

$db = Zend_Db::factory($config->database);
```
JavaScript switch case using enum I have an "enum" declared like so: ``` var PlaceType = { PASSABLE_TERRAIN: 1, IMPASSABLE_TERRAIN: 0, SOMEWHAT_PASSABLE_TERRAIN: 2, PATH: 3 }; ``` and a function declared like this: ``` setPlaceType(placeType) { this.clear = false; this.placeType = placeType; alert("before switch "+(PlaceType.SOMEWHAT_PASSABLE_TERRAIN==this.placeType)); switch(this.placeType) { case PlaceType.PASSABLE_TERRAIN: { alert("Case PASSABLE"); break; } case PlaceType.IMPASSABLE_TERRAIN: { alert("Case IMPASSABLE"); break; } case PlaceType.SOMEWHAT_PASSABLE_TERRAIN: { alert("Case SOMEWHAT_PASSABLE"); break; } case PlaceType.PATH: { alert("Case PATH"); break; } default: { alert("case default"); } } } ``` if I call it like this: ``` setPlaceType(1); ``` I get the following alerts: "before switch true", "case default" if I call it like this: ``` setPlaceType(2); ``` I get the following alerts: "before switch false", "case default" In other words, the function is called with the proper argument, which, when doing (what it seems to me to be) the same comparison as the switch but via "==" I get correct behavior, but the switch never matches the values to the appropriate case. Does anybody know why?
The loose equality operator `==` coerces operands of different types before comparing; when a string is compared with a number, the string is converted to a number. So if you pass in the string `'2'`, the `==` check before the switch still comes out true. A `switch`, however, compares with the strict equality operator `===` and will never match if the operands are not the same type.

Long story short, make sure you are always passing a number if your cases are comparing against numbers. You can double check like this:

```
setPlaceType(placeType) {
    if (typeof placeType !== 'number') {
        throw new Error('You must pass a number to setPlaceType!');
    }
    ...
}
```

Also, you should be calling your function like this:

```
setPlaceType(PlaceType.PASSABLE_TERRAIN);
```

otherwise there's not really any point to using the "enumeration" (I use that term loosely) object.
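To see the difference at a glance, and to normalize values that arrive as strings (the `userInput` variable below is hypothetical, e.g. something read from a form field):

```
'2' == 2;    // true  - loose equality converts the string to a number
'2' === 2;   // false - strict equality (what switch uses) never converts types

setPlaceType(Number('2'));             // passes the number 2, so the cases match
setPlaceType(parseInt(userInput, 10)); // normalize values read from the page first
```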
How does one use Elmah in ASP.NET 5/vNext/Core? I am a bit confused about how to use Elmah in the ASP.NET 5 / MVC 6 projects. I got the package from nuget and it added `"Elmah.Mvc": "2.1.2"` to dependencies in project.json. I am not sure where to go from here - back in the day, nuget would add entries to the web.config which is now gone. And I can't seem to find any examples on their github or elsewhere. Am I missing something simple?
Instead of using ELMAH, it's not hard to implement error logging manually. This process will catch any exception that occurs in the project and log it to a database table. To do this, add the following to the Configure method in Startup.cs ``` if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); app.UseBrowserLink(); } else { app.UseExceptionHandler(builder => { builder.Run(async context => { context.Response.StatusCode = (int)System.Net.HttpStatusCode.InternalServerError; context.Response.ContentType = "text/html"; var error = context.Features.Get<Microsoft.AspNetCore.Diagnostics.IExceptionHandlerFeature>(); if (error != null) { LogException(error.Error, context); await context.Response.WriteAsync("<h2>An error has occured in the website.</h2>").ConfigureAwait(false); } }); }); } ``` Include this in Startup.cs as well: ``` private void LogException(Exception error, HttpContext context) { try { var connectionStr = Configuration["ConnectionString"]; using (var connection = new System.Data.SqlClient.SqlConnection(connectionStr)) { var command = connection.CreateCommand(); command.CommandText = @"INSERT INTO ErrorLog (Application, Host, Type, Source, Path, Method, Message, StackTrace, [User], WhenOccured) VALUES (@Application, @Host, @Type, @Source, @Path, @Method, @Message, @StackTrace, @User, @WhenOccured)"; connection.Open(); if (error.InnerException != null) error = error.InnerException; command.Parameters.AddWithValue("@Application", this.GetType().Namespace); command.Parameters.AddWithValue("@Host", Environment.MachineName); command.Parameters.AddWithValue("@Type", error.GetType().FullName); command.Parameters.AddWithValue("@Source", error.Source); command.Parameters.AddWithValue("@Path", context.Request.Path.Value); command.Parameters.AddWithValue("@Method", context.Request.Method); command.Parameters.AddWithValue("@Message", error.Message); command.Parameters.AddWithValue("@StackTrace", error.StackTrace); var user = context.User.Identity?.Name; if (user == null) command.Parameters.AddWithValue("@User", DBNull.Value); else command.Parameters.AddWithValue("@User", user); command.Parameters.AddWithValue("@WhenOccured", DateTime.Now); command.ExecuteNonQuery(); } } catch { } } ``` Note that you will have to create a table in your database with the structure used in this function.
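For reference, here is one possible shape for that `ErrorLog` table. The column names follow the `INSERT` statement above; the data types, lengths and identity column are assumptions you may want to adjust for your own database:

```
CREATE TABLE ErrorLog (
    Id          INT IDENTITY(1,1) PRIMARY KEY,
    Application NVARCHAR(256)  NULL,
    Host        NVARCHAR(256)  NULL,
    Type        NVARCHAR(512)  NULL,
    Source      NVARCHAR(512)  NULL,
    Path        NVARCHAR(2048) NULL,
    Method      NVARCHAR(16)   NULL,
    Message     NVARCHAR(MAX)  NULL,
    StackTrace  NVARCHAR(MAX)  NULL,
    [User]      NVARCHAR(256)  NULL,
    WhenOccured DATETIME       NULL
);
```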
Pandas, group by resample and fill missing values with zero I have the following code ``` import pandas as pd data = {'date': ['2014-05-01', '2014-05-02', '2014-05-04', '2014-05-01', '2014-05-03', '2014-05-04'], 'battle_deaths': [34, 25, 26, 15, 15, 14], 'group': [1, 1, 1, 2, 2, 2]} df = pd.DataFrame(data, columns=['date', 'battle_deaths', 'group']) df['date'] = pd.to_datetime(df['date']) df = df.set_index('date') df = df.sort_index() ``` I want to have a battle deaths count per group without any gaps in the dates. Something like ``` battle_deaths group date 2014-05-01 34 1 2014-05-01 15 2 2014-05-02 25 1 2014-05-02 0 2 <--added with battle_deaths = 0 to fill the date range 2014-05-03 0 1 <--added 2014-05-03 15 2 2014-05-04 26 1 2014-05-04 14 2 ``` I have tried the following but it doesn't work(because the fillna method does not take a number, but adding it here to show what I want to achieve) ``` df.groupby(df.group.name).resample('D').fillna(0) ``` How can I do this with pandas?
Use [`Resampler.asfreq`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.resample.Resampler.asfreq.html) with parameter `fill_value=0`: ``` df = df.groupby('group').resample('D')['battle_deaths'].asfreq(fill_value=0).reset_index() print (df) group date battle_deaths 0 1 2014-05-01 34 1 1 2014-05-02 25 2 1 2014-05-03 0 3 1 2014-05-04 26 4 2 2014-05-01 15 5 2 2014-05-02 0 6 2 2014-05-03 15 7 2 2014-05-04 14 ```
GridView showing duplicate Item I am really surprised, what is happening with the code, the gridView items are showing duplicate at 11th position every time and 12th position also sometimes. [![enter image description here](https://i.stack.imgur.com/iRzFx.png)](https://i.stack.imgur.com/iRzFx.png) You can see the "Desktop" grid has been populated twice. **Grid Icon & Name from a constant file** ``` public static final String[] values = { DESKTOP, LAPTOP, SERVER, PRINTER, NETWORK_SUPPORT, AIR_CONTITION, CCTV_CAMERA, ALL_IN_ONE_PC, HOME_CLEANING, APPLICANCE_REPAIR, ELECTRICITY, PLUMBING }; public static final int[] images = { R.drawable.desktop_pc_icon, R.drawable.laptop_pc_icon, R.drawable.server_icon, R.drawable.printer_icon, R.drawable.network_support_icon, R.drawable.aircondition_icon, R.drawable.cctv_camera_icon, R.drawable.all_in_one_pc_icon, R.drawable.home_cleaning_icon, R.drawable.appliance_repair_icon, R.drawable.electrical_icon, R.drawable.plumbing_icon }; ``` **Grid Adopter Setting(In onCreate method)** ``` final GridView mainGrid = (GridView) findViewById(R.id.main_grid); mainGrid.setAdapter(new GridAdapter(MainActivity.this, AppConstants.productsId, AppConstants.values, AppConstants.images)); ``` **Grid Adapter** ``` public class GridAdapter extends BaseAdapter { private final String[] productsId; private final String[] values; private final int[] images; private Context context; private LayoutInflater layoutInflater; public GridAdapter(final Context context, final String [] productsId, final String[] values, final int[] images) { this.context = context; this.productsId = productsId; this.values = values; this.images = images; } @Override public int getCount() { return productsId.length; } @Override public Object getItem(final int i) { return values[i]; } @Override public long getItemId(final int i) { return 0; } @Override public View getView(final int i, View view, final ViewGroup viewGroup) { layoutInflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); if (view == null) { view = layoutInflater.inflate(R.layout.single_grid_item, null); //Grid Text final TextView gridText = view.findViewById(R.id.grid_text); gridText.setText(values[i]); //Grid Image final ImageView gridImage = view.findViewById(R.id.grid_image); gridImage.setImageResource(images[i]); } return view; } } ```
Change your `getView` as shown below. You should update the item every time `getView` is called, but in your case you updated it only when the view was null; the issue happens because the view gets reused.

```
@Override
public View getView(final int i, View view, final ViewGroup viewGroup) {
    layoutInflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
    if (view == null) {
        view = layoutInflater.inflate(R.layout.single_grid_item, null);
    }
    //Grid Text
    final TextView gridText = view.findViewById(R.id.grid_text);
    gridText.setText(values[i]);

    //Grid Image
    final ImageView gridImage = view.findViewById(R.id.grid_image);
    gridImage.setImageResource(images[i]);
    return view;
}
```
Which would be a quicker (and better) tool for querying data stored in the Parquet format - Spark SQL, Athena or ElasticSearch? Am currently building an ETL pipeline, which outputs tables of data (order of ~100+ GBs) to a downstream interactive dashboard, which allows filtering the data dynamically (based on pre-defined & indexed filters). Have zeroed in on using **PySpark / Spark for the initial ETL phase**. Next, this processed data will be **summarised** (*simple counts, averages, etc.*) & **then visualised** in the interactive dashboard. Towards the interactive querying part, I was wondering which tool might work best with my structured & transactional data (stored in Parquet format) - 1. Spark SQL (in memory dynamic querying) 2. AWS Athena (Serverless SQL querying, based on Presto) 3. Elastic Search (search engine) 4. Redis (Key Value DB) Feel free to suggest alternative tools, if you know of a better option.
Based on the information you've provided, I am going to make several assumptions: 1. You are on AWS (hence [Elastic Search](https://aws.amazon.com/elasticsearch-service/) and [Athena](https://aws.amazon.com/athena/) being options). Therefore, I will steer you to AWS documentation. 2. As you have pre-defined and indexed filters, you have well ordered, structured data. Going through the options listed 1. Spark SQL - If you are already considering [Spark](https://spark.apache.org/) and you are already on AWS, then you can leverage AWS [Elastic Map Reduce](https://aws.amazon.com/emr/details/spark/). 2. AWS Athena (Serverless SQL querying, based on Presto) - Athena is a powerful tool. It lets you query data stored on S3, which is quite cost effective. However, building workflows in Athena can require a bit of work as you'll spend a lot of time managing files on S3. Historically, Athena can only produce CSV output, so it often works best as the final stage in a Big Data Pipeline. However, with support for [CTAS](https://docs.aws.amazon.com/athena/latest/ug/create-table-as.html) statements, you can now output data in multiple formats such as Parquet with multiple compression algorithms. 3. [Elastic Search](https://aws.amazon.com/elasticsearch-service/) (search engine) - Is not really a query tool, so it is likely not part of the core of this pipeline. 4. Redis (Key Value DB) - Redis is an in memory key-value data store. It is generally used to provide small bits of information to be rapidly consumed by applications in use cases such as [caching and session management](https://aws.amazon.com/elasticache/what-is-redis/). Therefore, it does not seem to fit your use case. If you want some hands on experience with Redis, I recommend [Try Redis](https://try.redis.io/). I would also look into [Amazon Redshift](https://aws.amazon.com/redshift/?sc_channel=PS&sc_campaign=acquisition_US&sc_publisher=google&sc_medium=redshift_b&sc_content=redshift_e&sc_detail=aws%20redshift&sc_category=redshift&sc_segment=96578642442&sc_matchtype=e&sc_country=US&s_kwcid=AL!4422!3!96578642442!e!!g!!aws%20redshift&ef_id=V8j-3AAAAeKgd9i5:20171228141147:s). For further reading, read [Big Data Analytics Options on AWS](https://d1.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf). As @Damien\_The\_Unbeliever recommended, there will be no substitute for your own prototyping and benchmarking.
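As an aside on the CTAS point above, a statement along these lines (table, column and bucket names are made up for illustration) is how you would have Athena write a summarised copy of the data back to S3 as compressed Parquet:

```
CREATE TABLE curated.daily_summary
WITH (
    format = 'PARQUET',
    parquet_compression = 'SNAPPY',
    external_location = 's3://my-bucket/curated/daily_summary/'
) AS
SELECT order_date,
       COUNT(*)         AS orders,
       AVG(order_total) AS avg_order_total
FROM   raw.orders
GROUP  BY order_date;
```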
Change level logged to IPython/Jupyter notebook I have a package that relies on several different modules, each of which sets up its own logger. That allows me to log where each log message originates from, which is useful. However, when using this code in an IPython/Jupyter notebook, I was having trouble controlling what got printed to the screen. Specifically, I was getting a lot of DEBUG-level messages that I didn't want to see. How do I change the level of logs that get printed to the notebook? **More info:** I've tried to set up a root logger in the notebook as follows: ``` # In notebook import logging logging.basicConfig() logger = logging.getLogger() logger.setLevel(logging.INFO) # Import the module import mymodule ``` And then at the top of my modules, I have ``` # In mymodule.py import logging logger = logging.getLogger('mypackage.' + __name__) logger.setLevel(logging.DEBUG) logger.propagate = True # Log some messages logger.debug('debug') logger.info('info') ``` When the module code is called in the notebook, I would expect the logs to get propagated up, and then for the top logger to only print the info log statement. But both the debug and the info log statement get shown. **Related links:** - From [this IPython issue](https://github.com/ipython/ipython/issues/7170), it seems that there are two different levels of logging that one needs to be aware of. One, which is set in the `ipython_notebook_config` file, only affects the internal IPython logging level. The other is the IPython logger, accessed with `get_ipython().parent.log`. - <https://github.com/ipython/ipython/issues/8282> - <https://github.com/ipython/ipython/issues/6746>
With current ipython/Jupyter versions (e.g. 6.2.1), the `logging.getLogger().handlers` list is empty after startup and `logging.getLogger().setLevel(logging.DEBUG)` has no effect, i.e. no info/debug messages are printed. Inside ipython, you have to change an ipython configuration setting (and possibly work around ipython bugs), as well. For example, to lower the logging threshold to debug messages: ``` # workaround via specifying an invalid value first %config Application.log_level='WORKAROUND' # => fails, necessary on Fedora 27, ipython3 6.2.1 %config Application.log_level='DEBUG' import logging logging.getLogger().setLevel(logging.DEBUG) log = logging.getLogger() log.debug('Test debug') ``` For just getting the debug messages of one module (cf. the `__name__` value in that module) you can replace the above `setLevel()` call with a more specific one: ``` logging.getLogger('some.module').setLevel(logging.DEBUG) ```
Parse Server Simple Mailgun Adapter 'verifyUserEmails' issue I am using the [Parse Server Simple Mailgun Adapter](https://github.com/ParsePlatform/parse-server-simple-mailgun-adapter), and my Parse Server is working perfectly on Heroku. I am new to node.js and Express, but I installed the adapter on the root of the Parse Server via: ``` npm i parse-server-simple-mailgun-adapter ``` This created a node\_modules folder and essentially cloned the Github repository for the Mailgun Adapter. My index.js Parse Server configuration looks like: ``` var api = new ParseServer({ verifyUserEmails: true, databaseURI: databaseUri || 'mongodb://DATABASE', cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js', appId: process.env.APP_ID || 'APPID', masterKey: process.env.MASTER_KEY || 'MASTERKEY', //Add your master key here. Keep it secret! serverURL: process.env.SERVER_URL || 'https://SERVER/parse', // Don't forget to change to https if needed publicServerURL: 'https://SERVER/parse', fileKey: process.env.FILE_KEY || 'FILEKEY', push: { ios: [ { pfx: 'FILE.p12', // Dev PFX or P12 bundleId: 'BUNDLE', production: false // Dev }, { pfx: 'FILE.p12', // Prod PFX or P12 bundleId: 'BUNDLE', production: true // Prod } ] }, emailAdapter: { module: 'parse-server-simple-mailgun-adapter', options: { fromAddress: 'EMAIL@DOMAIN', domain: 'DOMAIN', apiKey: 'KEY', } }, liveQuery: { classNames: ["Posts", "Comments"] // List of classes to support for query subscriptions } }); ``` The server works perfectly when commenting out the `verifyUserEmails` key. With it, the server will not work. The Mailgun adapter does not work regardless. Any help would be greatly appreciated. Thanks!
Did you set up the email adapter? Take a look at : <https://github.com/ParsePlatform/parse-server> Email verification and password reset Verifying user email addresses and enabling password reset via email requries an email adapter. As part of the parse-server package we provide an adapter for sending email through Mailgun. To use it, sign up for Mailgun, and add this to your initialization code: ``` var server = ParseServer({ ...otherOptions, // Enable email verification verifyUserEmails: true, // The public URL of your app. // This will appear in the link that is used to verify email addresses and reset passwords. // Set the mount path as it is in serverURL publicServerURL: 'https://example.com/parse', // Your apps name. This will appear in the subject and body of the emails that are sent. appName: 'Parse App', // The email adapter emailAdapter: { module: 'parse-server-simple-mailgun-adapter', options: { // The address that your emails come from fromAddress: '[email protected]', // Your domain from mailgun.com domain: 'example.com', // Your API key from mailgun.com apiKey: 'key-mykey', } } }); ``` You can also use other email adapters contributed by the community such as parse-server-sendgrid-adapter or parse-server-mandrill-adapter. OR Create your own in cloud code using mailgun-js <https://www.npmjs.com/package/mailgun-js> ``` var api_key = '[SECRET_API_KEY]'; var domain = '[DOMAIN_HERE]'; var mailgun = require('mailgun-js')({apiKey: api_key, domain: domain}); Parse.Cloud.define('testemail', function(req, res) { var data = { from: 'Excited User <[email protected]>', to: '[email protected]', subject: 'Hello', text: 'Testing some Mailgun awesomness!' }; mailgun.messages().send(data, function (error, body) { console.log(body); }); res.success('Email Sent!'); }); ```
How to delete documents from Elasticsearch I can't find any example of deleting documents from `Elasticsearch` in Python. What I've seen by now - is definition of `delete` and `delete_by_query` functions. But for some reason [documentation](http://elasticsearch-py.readthedocs.org/en/master/api.html#elasticsearch.Elasticsearch.delete_by_query) does not provide even a microscopic example of using these functions. The single list of parameters does not tell me too much, if I do not know how to correctly feed them into the function call. So, lets say, I've just inserted one new doc like so: ``` doc = {'name':'Jacobian'} db.index(index="reestr",doc_type="some_type",body=doc) ``` Who in the world knows how can I now delete this document using `delete` and `delete_by_query` ?
Since you are not giving a document id while indexing your document, you have to get the auto-generated document id from the return value and delete according to the id. Or you can define the id yourself, try the following: ``` db.index(index="reestr",doc_type="some_type",id=1919, body=doc) db.delete(index="reestr",doc_type="some_type",id=1919) ``` In the other case, you need to look into return value; ``` r = db.index(index="reestr",doc_type="some_type", body=doc) # r = {u'_type': u'some_type', u'_id': u'AU36zuFq-fzpr_HkJSkT', u'created': True, u'_version': 1, u'_index': u'reestr'} db.delete(index="reestr",doc_type="some_type",id=r['_id']) ``` Another example for delete\_by\_query. Let's say after adding several documents with name='Jacobian', run the following to delete all documents with name='Jacobian': ``` db.delete_by_query(index='reestr',doc_type='some_type', q={'name': 'Jacobian'}) ```
WshShell.AppActivate doesn't seem to work in simple vbs script total vbs scripting newb here. I'm trying to automate closing a certain open window, namely a program called HostsMan. This is on Windows 8.1 Pro 64 bit, and this is what my script currently looks like: ``` Set WshShell = CreateObject("WScript.Shell") WshShell.AppActivate "HostsMan" WshShell.SendKeys "%{F4}" ``` The second line doesn't seem to work. I know line 3 works because it activates the Windows shutdown menu. Is there something I'm missing? Update/more info: Manually entering alt-F4 does close it, so I know this should work. I also tested this script with other open windows and they close just fine. Additionally, HostsMan is opened with Admin privileges, so I tried running the script as a task set with highest privileges to see if that would do it, and still no go. But that does work with other open windows running with Admin privileges. Frustrating!
I've tried it, too, and couldn't get it to work. There must be something about the window class, perhaps, where `AppActivate` doesn't see it as a top-level window? In any event, `AppActivate` also lets you pass the process ID instead of the window title. When I installed HostsMan, the process name was `hm.exe`, so I'll use that in my example below. ``` Set Processes = GetObject("winmgmts:").InstancesOf("Win32_Process") For Each Process In Processes If StrComp(Process.Name, "hm.exe", vbTextCompare) = 0 Then ' Activate the window using its process ID... With CreateObject("WScript.Shell") .AppActivate Process.ProcessId .SendKeys "%{F4}" End With ' We found our process. No more iteration required... Exit For End If Next ```
Trying to use templatised fuctions to swap two strings ``` #include<iostream> #include<string> template <typename T> void swap(T a , T b) { T temp = a; a = b; b = temp; } template <typename T1> void swap1(T1 a , T1 b) { T1 temp = a; a = b; b = temp; } int main() { int a = 10 , b = 20; std::string first = "hi" , last = "Bye"; swap(a,b); swap(first, last); std::cout<<"a = "<<a<<" b = "<<b<<std::endl; std::cout<<"first = "<<first<<" last = "<<last<<std::endl; int c = 50 , d = 100; std::string name = "abc" , surname = "def"; swap1(c,d); swap1(name,surname); std::cout<<"c = "<<c<<" d = "<<d<<std::endl; std::cout<<"name = "<<name<<" surname = "<<surname<<std::endl; swap(c,d); swap(name,surname); std::cout<<"c = "<<c<<" d = "<<d<<std::endl; std::cout<<"name = "<<name<<" surname = "<<surname<<std::endl; return 0; } ``` --- ``` **Output** a = 10 b = 20 first = Bye last = hi c = 50 d = 100 name = abc surname = def c = 50 d = 100 name = def surname = abc ``` --- Both `swap()` and `swap1()` basically have the same function-definitions then why only `swap()` actually swaps the strings, while `swap1()` does not? Also can you tell me that how are stl strings passed as arguments by default i.e are they passed by value or by reference?
I can see why people are frowning upon ADL now... What you see is an effect of [Argument Dependent Lookup](https://stackoverflow.com/questions/8111677/what-is-argument-dependent-lookup-aka-adl-or-koenig-lookup). If you'd add a print inside your `swap` implementation, you'd notice that it is **not** called for `std::string`, only for `int`. `std::swap` is preferred over your version, because there exists an [explicit specialization](https://en.cppreference.com/w/cpp/string/basic_string/swap2) for `std::basic_string` type. If it didn't exist, call would be ambiguous probably. For `int`, namespace `std` is not considered in the lookup process, so your version is the only acceptable. > > Also can you tell me that how are stl strings passed as arguements by default i.e are they passed by value or by reference? > > > Everything in C++ is passed by value, unless you mark it as pass-by-reference explicitly.
How do I copy-protect my Java application? I want to sell my Java application using PayPal. When a payment is received, the customer is mailed a one-time download link to my application. My question is, how do I prevent people from sending the .jar file to their friends/uploading it on the internet? Obviously I need some kind of check in the application which only allows it to run on one computer. And that's another problem, I don't want a customer to have limitations on one computer, they should be able to run it at home and work etc. Maybe some kind of cd-key would do the trick? Is there any cd-key resources for Java out there? Or should i build my own algorithm? Here comes yet another problem, reverse engineering.. Please help me solve this :)
The most common way to handle this is: customer gives you money, you generate a unique unlock key and provide that to the customer. The key should be a valid unlock key, and it should encode identification information about the user. Customer uses that key to install on as many of their computers as they like, or is allowed by the license. Don't worry about reverse-engineering. Your product, if it's at all popular, will be pirated - you'll find unlock keys online if you look hard enough. If you like, you can take that as a compliment - someone liked your software enough to steal it. Cold comfort, I know, but the alternative is to get in an arms race with the pirates, and you won't win that. In the end, a few percent of the population will steal software, and you can't do much about that. You **can** use the key's identification information, if your scheme is strong enough, to trace who released the key in the first place.
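To make the "key encodes identification information" idea a bit more concrete, here is one rough sketch of such a scheme: sign the customer's e-mail address with a private secret and hand out the pair as the unlock key. Everything here, including the class name, key format and truncation length, is illustrative rather than prescriptive, and since the secret has to ship inside the application a determined reverse-engineer can still extract it, exactly as discussed above:

```
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class LicenseKeys {

    // Assumption: this secret is baked into your build; obfuscate it, but accept it can be found.
    private static final String SECRET = "replace-with-your-own-secret";

    // Generate an unlock key that embeds the customer's identity.
    public static String generate(String customerEmail) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] sig = mac.doFinal(customerEmail.trim().toLowerCase().getBytes(StandardCharsets.UTF_8));
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(sig).substring(0, 20);
        return customerEmail + ":" + token;
    }

    // The application recomputes the signature and compares it with the supplied key.
    public static boolean isValid(String key) throws Exception {
        int sep = key.lastIndexOf(':');
        return sep > 0 && generate(key.substring(0, sep)).equals(key);
    }
}
```

Because the key contains the customer's e-mail in the clear, a key that shows up on a warez site tells you exactly whose copy leaked.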
How do I use "implicit" as apply() parameter? I want to do this: ``` abstract class Context { def getInt(id: Int): Int } abstract class Dependency[+T] (val name: String, val id: Int) extends Function1[Context,T] class IntDependency(name: String, id: Int) extends Dependency[Int](name, id) { def apply(implicit context: Context): Int = context.getInt(id) } ``` But then I get an error message like this: ``` class IntDependency needs to be abstract, since method apply in trait Function1 of type (v1: Context)Long is not defined (Note that T1 does not match Context) ``` I understand that implicits should normally be part of the *second* parameter list, but I can't work out how to code it so it compiles, and gives the result I want. Explanation: I'm trying to create a framework where one can define "Function" object, which can depend on other functions to compute their value. All functions should only take a single Context parameter. The context know the "result" of the other functions. The function instances should be immutable, with the state residing in the context. I want the functions to create "dependency" fields at creation time, which take the context *implicitly*, and return the value of the dependency within that context, so that accessing the dependency inside of the apply method "feels like" accessing a parameter or field, that is without explicitly giving the context as parameter to the dependency.
Are you sure you need your `Dependency` to extend a `Function`? Because if you don't, just leave the `extends Function1[Context,T]` part out and your code will work. If you really need to extend a `Function` than I don't know of a solution in your case. But there are cases where you could try to overload the `apply` method. Like here: ``` scala> val sum = new Function1[Int, Function1[Int, Int]] { | def apply(a: Int) = (b: Int) => a + b | def apply(a: Int)(implicit b: Int) = a + b |} sum: java.lang.Object with (Int) => (Int) => Int{def apply(a:Int)(implicit b: Int): Int} = <function1> scala> sum(2)(3) res0: Int = 5 scala> implicit val b = 10 b: Int = 10 scala> sum(2) res1: Int = 12 ```
How to organise declarations and functions in code .h file and includes for best code re-use Hope someone can help - apologies for this very basic question, but I'm using standard C to write code for some experiments with PIC microcontrollers, and I'm very new to the C Language. I have various logical groups of code, such as functions to control an LCD display, that I'd like to make re-usable to PIC-based projects and would like to know how best to break-up these logical code groups for re-usability. In the example of the LCD functions, I presume I split the declarations into a header named 'lcd.h' (including in 'Header Files' project directory of my IDE) and the function definitions in an include 'lcd.c' (including in 'Source Files' project directory of my IDE) - would this be correct? What are the naming conventions for breaking up code in this way? For instance should all global declarations be in a header file named 'main.h'? Many thanks, Alex
It's good that you are focusing on modularity. Not only will it bring you re-usability, but it will also make debugging easier.

# Naming convention

You are correct. For LCD functions, create the files `lcd.c` and `lcd.h`. There are no universal function naming conventions, but here's what I use:

`lcd.h`:

```
void LCD_PublicFunction(void);
```

`lcd.c`:

```
static void PrivateFunction(void);

void LCD_PublicFunction(void)
{
    // Function goes here
}

static void PrivateFunction(void)
{
    // Function goes here
}
```

I place a `LCD_` prefix in front of all my public functions in order to prevent namespace collisions and also to help me find the location of a function at any point. The `static` keyword prevents any functions outside of `lcd.c` from seeing `PrivateFunction`, and I drop the prefix to denote private.

# Globals

Please avoid global variables. As your project grows, it will become harder to trace the logic. Instead, use getter and setter functions.

```
static int brightness;

void LCD_SetBrightness(int var)
{
    brightness = var;
}

int LCD_GetBrightness(void)
{
    return brightness;
}
```

This gives you more flexibility. Perhaps you'll need to add a little logic each time the brightness is set. Perhaps you want the variable to be read-only, so you can drop the setter.

# Granularity

Try to break up your project as much as possible. I'm assuming you'll be using some sort of serial port to communicate with the LCD. Break up the communications firmware from the LCD display firmware. For instance, if it uses SPI, then you should create an `spi.c` and an `spi.h`.

I've seen this practice go too far. I've seen people wrap functionality around all of the I/O ports such that they have functions to set digital pins high and low.

Bad example:

```
void IO_PortA7 (char val)
{
    LATAbits.LATA7 = val;
}
```

I haven't really gained anything here other than adding some syntactic sugar. Simply use `LATAbits.LATA7` in the code since it's the standard way to turn I/O on and off on a PIC.

Good example:

```
void FX_SetBuzzer (char is_active)
{
    LATAbits.LATA7 = is_active;
}
```

Just by reading the code, you can tell that I've connected a buzzer to pin A7. Furthermore, the rest of the code doesn't care how I've connected the buzzer, and there's only one change I have to make if I have to move the buzzer to a different pin. Finally, by using a variable name of `is_active`, I document the fact that the buzzer is active-high. I try to use question-style names for all boolean variables to remember what happens in the true condition.

# Testing

One final piece of advice: in each of your `.c` files, create a test harness.

```
#ifdef LCD_TEST

int main(void)
{
    // Enable LCD communication.
    LCD_Init();

    // Display friendly greeting.
    LCD_Display("Hello, world!");

    // Wait for power-down.
    for(;;);
}

#endif
```

This way, you can build a tiny program that tests your LCD by itself. It serves several purposes.

- It follows [Test-driven\_development](http://en.wikipedia.org/wiki/Test-driven_development), which is a Good Thing.
- It provides a basic form of documentation by showing a functional example.
- In the future, your LCD might suddenly stop working. Simply invoke this code and see what happens. If the test works, you know your latest changes broke the LCD functionality somehow. If it doesn't, you know the problem is with the LCD itself or the connections between it and the PIC.

Good luck!
View full program command line arguments in OOM killer logs Is it possible to view full program command line arguments in OOM killler logs ? What I see now in /var/log/syslog is ``` Memory cgroup out of memory: Kill process 29187 (beam.smp) score 998 or sacrifice child Killed process 29302 (cpu_sup) total-vm:4300kB, anon-rss:76kB, file-rss:272kB beam.smp invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0 beam.smp cpuset=/ mems_allowed=0-3 ``` I have plenty of beam.smp processes on my machine and it's not very convenient to find out what particular process was killed by OOM killer.
``` echo 1 > /proc/sys/vm/oom_dump_tasks ``` which seems about the max that you can get the kernel to display on out-of-memory errors. <https://www.kernel.org/doc/Documentation/sysctl/vm.txt> > > Enables a system-wide task dump (excluding kernel threads) to be > produced when the kernel performs an OOM-killing and includes such > information as pid, uid, tgid, vm size, rss, nr\_ptes, swapents, > oom\_score\_adj score, and name. This is helpful to determine why the > OOM killer was invoked, to identify the rogue task that caused it, and > to determine why the OOM killer chose the task it did to kill. > > > If this is set to zero, this information is suppressed. On very large > systems with thousands of tasks it may not be feasible to dump the > memory state information for each one. Such systems should not be > forced to incur a performance penalty in OOM conditions when the > information may not be desired. > > > If this is set to non-zero, this information is shown whenever the OOM > killer actually kills a memory-hogging task. > > >
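If you want that setting to survive reboots, drop it into a sysctl configuration file as well (the file name below is just a convention; any name under `/etc/sysctl.d/` works on most modern distros):

```
echo 'vm.oom_dump_tasks = 1' | sudo tee /etc/sysctl.d/90-oom-dump.conf
sudo sysctl --system        # reload all sysctl configuration files
```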
How should I reason when I have to choose between a class, struct and enum in Swift? Since classes, structs and enums all has constructors, properties and computed properties, how should I reason when choosing between them?
ChristopheD's and Jack Wu's answers are good, but I feel they don't touch on enums, or miss their importance. Swift enums are (meant to be) a full implementation of algebraic data types. Classes and structs are traditionally used to model data in object-oriented languages, but enums are usually limited to being used as a convenient way to limit the value of a variable to a limited number of possibilities. E.g. (C++): ``` enum MaritalStatus { Unmarried, Married, Divorced, WidowedOrWidowered }; MaritalStatus m = Unmarried; ``` Swift enums can do the above but they can do a lot more. Of course the Language Guide has a pretty good [barcode modelling example](https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/Swift_Programming_Language/Enumerations.html#//apple_ref/doc/uid/TP40014097-CH12-XID_189) but the best example I know of that really drives home the point of modelling data with algebraic data types is Scott Wlaschin's presentation: <http://www.slideshare.net/ScottWlaschin/ddd-with-fsharptypesystemlondonndc2013> You would probably benefit from going through the whole presentation but really to 'get' the point all you need to see is slide 60 where he shows how to model a 'payment method' in a typical line-of-business app. The examples in the presentation are in F# but F# isn't *that* far off from Swift and you can pretty easily map between them. E.g., the payment methods enum in Swift would look like: ``` enum PaymentMethod { case cash // No extra data needed. case cheque(Int) // Cheque #. case card(CardType, CardNumber) // 2 pieces of extra data. } ``` The point of the above is that each order's payment method can be only one of the above three methods. Anything else will not be allowed by the compiler. This is a very succinct alternative to building entire class hierarchies to model these almost trivial things. The presentation really takes off from there and the best part is Swift can do almost everything that F# can in terms of data modelling, using optional types, etc.
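To round that off, consuming such an enum is just a `switch` with pattern matching over the associated values (reusing the hypothetical `CardType` and `CardNumber` types from the sketch above):

```
func describe(_ method: PaymentMethod) -> String {
    switch method {
    case .cash:
        return "Paid in cash"
    case .cheque(let number):
        return "Paid by cheque #\(number)"
    case .card(let type, let number):
        return "Paid by \(type) card \(number)"
    }
}
```

The compiler enforces exhaustiveness here, which is the whole point: add a fourth payment method later and every `switch` that doesn't handle it stops compiling.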
Which Grub to use for a custom portable boot image? I would like to make a bootable USB/Floopy/LiveCD with linux kernel and Grub. After booting to that USB/Floopy/LiveCD using VirtualBox or directly, it will show my own customized Grub screen and then it will execute my C or Pascal application. I was trying to [download grub](ftp://alpha.gnu.org/gnu/grub/) but I am not sure which one I should use. Is there any issue to download the correct version of Grub such as for 32-Bit or 64-Bit downloads are different? Which Grub should I download to get started with my own customized bootable image?
There are only two versions of grub listed there: the 1.x series (most recent being 0.97) and the 2.x series (most recent being 1.99). Both can be customized and used for your purpose. The 1.x series has broader compatibility with old hardware and distros, but the 2.x series is coming along nicely and many major distros are switching to it.

32-bit vs 64-bit architecture is not a consideration for grub at this stage of the boot process; that won't come into play until you launch a kernel. Since grub doesn't do much, it's happy to run on a generic set of CPU instructions.

But really you shouldn't be starting with grub and working up from there ... that will be a long road. You should probably start with some already arranged livecd image and work backwards to pare it down to just run your program on boot. This will save you all kinds of trouble. Pick some lightweight livecd that you like and get its source, then start stripping out the bits you don't need and adding your program.
RMarkdown. How to reduce space between title block and start of body text I've been using RMarkdown via RStudio on a Mac successfully. Recently upgraded to RStudio 1.2.5019 and tinytex\_0.18 and now the vertical spaceing between my "title block" and "first body text / heading" has increased. Simple example, (deliberately excluding `author:` and `date:`), is: ``` --- output: pdf_document title: "Example of Title to Body Text" subtitle: Subtitle Places Here --- This is the first paragraph (or heading if specified as such). It is quite a way down from the title of the document. How can this be reduced to a "single line" vertical space? Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt. ``` I then tried to use the "titlesec" package, but haven't been able to work out which command to use to achieve the desired outcome. Example of attempt is: ``` --- output: pdf_document subparagraph: yes header-includes: | \usepackage{titlesec} \titlespacing{\title}{0pt}{\parskip}{-\parskip} title: "Example of Title to Body Text" subtitle: Subtitle Places Here --- This is the first paragraph (or heading if specified as such). It is quite a way down from the title of the document. How can this be reduced to a "single line" vertical space? Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt. ``` This is what it currently looks like rendered to PDF. [![Current output](https://i.stack.imgur.com/y76dd.jpg)](https://i.stack.imgur.com/y76dd.jpg) This is what I would like the PDF to look more like (edit from graphics program). [![Desired output](https://i.stack.imgur.com/qU00F.png)](https://i.stack.imgur.com/qU00F.png) So, how can I reduce this vertical spacing between the title block and the start of the document's body? Thanks in advance.
You can reduce the space between the (sub)title and the first paragraph by adding `\vspace{}` command from LaTeX right before the first paragraph. ``` --- output: pdf_document subparagraph: yes header-includes: | \usepackage{titlesec} \titlespacing{\title}{0pt}{\parskip}{-\parskip} title: "Example of Title to Body Text" subtitle: Subtitle Places Here --- \vspace{-5truemm} This is the first paragraph (or heading if specified as such). It is quite a way down from the title of the document. How can this be reduced to a "single line" vertical space? Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt. ```
How to parse a string into a java.sql.Date Hi, I am trying to parse a string into a `java.sql.Date`. Here is what I am doing:

```
private static SimpleDateFormat sdfout = new SimpleDateFormat("yyyy.MM.dd.HH.mm");

try{
    String date = "2010.09.30.13.18";
    task.setDueDate(new java.sql.Date(sdfout.parse(date).getTime()));
}
```

The problem is that I only get the date back, not the time. Am I doing this correctly?
The code logic is fine, you only need [`java.sql.Timestamp`](http://download.oracle.com/javase/6/docs/api/java/sql/Timestamp.html) instead of [`java.sql.Date`](http://download.oracle.com/javase/6/docs/api/java/sql/Date.html). The SQL timestamp represents both the date and time parts, like date in Java. The SQL date represents only the date part. ### See also: - [Handling timestamps in MySQL using JDBC](https://stackoverflow.com/questions/3323618/handling-mysql-datetimes-and-timestamps-in-java/3323870#3323870) --- Noted should be that you should prefer [`java.util.Date`](http://download.oracle.com/javase/6/docs/api/java/util/Date.html) over `java.sql.Timestamp` in the model objects and use the `Timestamp` only at the very moment when you're about to set it in a SQL query. This separates the model objects from the JDBC API and improves their portability and reusability.
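For completeness, the snippet from the question would then look roughly like this (assuming `task.setDueDate` accepts a `java.sql.Timestamp` or a plain `java.util.Date`, with the `SimpleDateFormat`/`ParseException` imports as in the question):

```
// Same as the question's snippet, but producing a Timestamp instead of a java.sql.Date
private static SimpleDateFormat sdfout = new SimpleDateFormat("yyyy.MM.dd.HH.mm");

try {
    String date = "2010.09.30.13.18";
    task.setDueDate(new java.sql.Timestamp(sdfout.parse(date).getTime()));
} catch (ParseException e) {
    e.printStackTrace();
}
```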
Learning basic Rails: Where to put conditional business logic? So I'm trying to teach myself Rails and having some trouble figuring out where logic goes. In my exercise, I have a payment model (`class payment`) with an `Integer Product_Type` and a `String Product_name`. There are rules for handling payments:

- if the Product_Type is physical, do this; if virtual, do that
- if the Product_name is book, do this; if the Product_name is cow, do that

What I can't figure out is where to put these rules. Do I make a method in the model called process that runs these rules? Does this logic go in the controller? I'm just not clear on this. Any insight would be appreciated. Thanks
You should definitely keep this logic in the model, and in fact, if the logic is significantly different between different types you should use multiple models with Single Table Inheritance. See: <http://joemcglynn.wordpress.com/2009/12/11/rails-sti-part-1/> <http://joemcglynn.wordpress.com/2009/12/12/rails-single-table-inheritance-part-2/> Basically the idea is this: You're already defining Product Type -- the 'type' column is the main feature of an STI table. With STI instead of having one model with tons and tons of conditional logic or multiple models, you have several VERY SIMILAR models with VERY SIMILAR data but somewhat different logic, so all those related models can share the same table, and inherit from a common class. For instance: ``` class Product < ActiveRecord::Base ... common logic goes here ... end class PhysicalProduct < Product ... physical-product-specific logic goes here ... end class VirtualProduct < Product ... virtual-product-specific logic goes here ... end ``` So, in this way you can create a method like `product.deliver` which is defined by default in the product model to trigger shipping a product -- but in the VirtualProduct model is overridden to trigger emailing a download link instead. ActiveRecord handles all of this very nicely (see the linked articles above for a walkthrough), and the majority of your forms and links and controllers etc. will continue to work the same way they currently do. **In general you always want to keep as much logic as possible in the models instead of the controllers, because models are easier to test, and easier to debug.** In your situation STI is a nice way to keep this branching logic in the models an out of the controllers and views.
Save an image (only content, without axes or anything else) to a file using Matloptlib I'd like to obtain a spectrogram out of a wav file and then save it to a png, but I need just the content of the image (not axes or anything else). I came across these questions [Matplotlib plots: removing axis, legends and white spaces](https://stackoverflow.com/questions/9295026/matplotlib-plots-removing-axis-legends-and-white-spaces) [scipy: savefig without frames, axes, only content](https://stackoverflow.com/questions/8218608/scipy-savefig-without-frames-axes-only-content) I've also read [the Matplotlib documentation](http://matplotlib.org/api/_as_gen/matplotlib.pyplot.savefig.html#matplotlib.pyplot.savefig) but it seems useless and so either answers to questions above are outdated or I'm doing something wrong because simple `plt.savefig('out.png', bbox_inches='tight', pad_inches=0)` does not do what I want to achieve. Initially I tried to follow [this guide](http://www.frank-zalkow.de/en/code-snippets/create-audio-spectrograms-with-python.html?i=1) but the code crashes. Then I tried [this approach](https://vkolachalama.blogspot.com/2016/08/convert-wav-file-to-spectrogram-in.html), but since it's outdated I modified it a little: ``` import matplotlib.pyplot as plt from scipy.io import wavfile import numpy as np def graph_spectrogram(wav_file): rate, data = wavfile.read(wav_file) pxx, freqs, bins, im = plt.specgram(x=data, Fs=rate, noverlap=384, NFFT=512) plt.axis('off') plt.savefig('sp_xyz.png', bbox_inches='tight', dpi=300, frameon='false') if __name__ == '__main__': # Main function graph_spectrogram('...') ``` This is what I got: [![enter image description here](https://i.stack.imgur.com/abiCC.png)](https://i.stack.imgur.com/abiCC.png) Maybe it's not visible, but there's a white border around the content (from the biggest to the smallest): left, bottom, top, right. I want the same image but just the content without anything else. How can I achieve that? I use python 3.6 and Matplotlib 2.0.2.
I think you want [`subplots_adjust`](http://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.subplots_adjust.html): ``` fig,ax = plt.subplots(1) fig.subplots_adjust(left=0,right=1,bottom=0,top=1) ax.axis('tight') ax.axis('off') ``` In this case: ``` import matplotlib.pyplot as plt from scipy.io import wavfile import numpy as np def graph_spectrogram(wav_file): rate, data = wavfile.read(wav_file) fig,ax = plt.subplots(1) fig.subplots_adjust(left=0,right=1,bottom=0,top=1) ax.axis('off') pxx, freqs, bins, im = ax.specgram(x=data, Fs=rate, noverlap=384, NFFT=512) ax.axis('off') fig.savefig('sp_xyz.png', dpi=300, frameon='false') if __name__ == '__main__': # Main function graph_spectrogram('...') ```
Plot Confusion Matrix with scikit-learn without a Classifier I have a confusion matrix created with `sklearn.metrics.confusion_matrix`. Now, I would like to plot it with `sklearn.metrics.plot_confusion_matrix`, but the first parameter is the trained classifier, as specified in the [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html#sklearn.metrics.plot_confusion_matrix). The problem is that I don't have a classifier; the results were obtained doing manual calculations. Is it still possible to plot the confusion matrix in one line via scikit-learn, or do I have to code it myself with matplotlib?
The fact that you can import `plot_confusion_matrix` directly suggests that you have the latest version of scikit-learn (0.22) installed. So you can just look at the source code of `plot_confusion_matrix()` to see how its using the `estimator`. From the [latest sources here](https://github.com/scikit-learn/scikit-learn/blob/5f3c3f037/sklearn/metrics/_plot/confusion_matrix.py#L120), the estimator is used for: 1. computing confusion matrix using `confusion_matrix` 2. getting the labels (unique values of y which correspond to 0,1,2.. in the confusion matrix) So if you have those two things already, you just need the below part: ``` import matplotlib.pyplot as plt from sklearn.metrics import ConfusionMatrixDisplay disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=display_labels) # NOTE: Fill all variables here with default values of the plot_confusion_matrix disp = disp.plot(include_values=include_values, cmap=cmap, ax=ax, xticks_rotation=xticks_rotation) plt.show() ``` Do look at the NOTE in comment. For older versions, you can look at how the matplotlib part is coded [here](https://scikit-learn.org/0.21/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py)
Street View Map with Google Maps Using Text Address I have a google map on my wordpress single post page that grabs the address from 2 custom fields. It works fine, but now I'm trying to add a street view link/option. I have on my page -- ``` <iframe width="100%" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" src="https://www.google.com/maps/embed/v1/place?q=<?php echo $add; ?>,%20<?php $terms = wp_get_post_terms(get_the_ID(), 'city-type'); if ( ! empty( $terms ) && ! is_wp_error( $terms ) ){ foreach ( $terms as $term ) { if ($term->parent == 0) //check for parent terms only echo '' . $term->name . ''; } } ?>&zoom=17&key=mytoken"></iframe> ``` Which will then output something like this -- ``` <iframe width="100%" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" src="https://www.google.com/maps/embed/v1/place?q=100 las vegas ave,%20Las Vegas, NV&amp;zoom=17&amp;key=mytoken"></iframe> ``` Is there a way to add street view without using coordinates? I tried getting the coordinates but they were slightly off -- ``` <?php function getCoordinates($address){ $address = str_replace(" ", "+", $address); // replace all the white space with "+" sign to match with google search pattern $url = "https://maps.google.com/maps/api/geocode/json?sensor=false&address=$address"; $response = file_get_contents($url); $json = json_decode($response,TRUE); //generate array object from the response from the web return ($json['results'][0]['geometry']['location']['lat'].",".$json['results'][0]['geometry']['location']['lng']); } $terms = wp_get_post_terms(get_the_ID(), 'city-type'); if ( ! empty( $terms ) && ! is_wp_error( $terms ) ){ foreach ( $terms as $term ) { if ($term->parent == 0) //check for parent terms only echo getCoordinates($add, $term->name, $property_pin); } } else { echo getCoordinates($add, $term->name, $property_pin); } ?> ``` I'm already using geocode to try and get the coordinates before hand. For example the geocode gives me these coordinates -- **34.0229995,-118.4931421** but the coordinates I'm looking for is -- **34.050217,-118.259491**
Ok, I figured it out. I used the code from my question to get the coordinates of the address -- ``` <?php // FUNCTION TO CONVERT ADDRESSES INTO COORDINATES function getCoordinates($address){ $address = str_replace(" ", "+", $address); // replace all the white space with "+" sign to match with google search pattern $url = "https://maps.googleapis.com/maps/api/geocode/json?address=$address&key=YOUR_TOKEN_KEY"; $response = file_get_contents($url); $json = json_decode($response,TRUE); //generate array object from the response from the web return ($json['results'][0]['geometry']['location']['lat'].",".$json['results'][0]['geometry']['location']['lng']); } // THIS GETS MY TOP TERM FROM MY CUSTOM TAXONOMY I.E. NEW YORK, NY $terms = wp_get_post_terms(get_the_ID(), 'city-type'); if ( ! empty( $terms ) && ! is_wp_error( $terms ) ){ foreach ( $terms as $term ) { if ($term->parent == 0) //check for parent terms only // $ADD IS FROM MY CUSTOM FIELD FOR THE ADDRESS I.E. 1460 BROADWAY $the_address = getCoordinates($add, $term->name); } }; ?> ``` Then I simply used the following google embed code (replace token key with your own) --- ``` <iframe width="100%" height="350" frameborder="0" style="border:0" src="https://www.google.com/maps/embed/v1/streetview?location=<?php echo $the_address; ?>&heading=165&pitch=0&key=YOUR-TOKEN-KEY" allowfullscreen></iframe> ``` That's for adding the street view map, I wanted both though so what I did was create two divs, one for each map and then just use click functions to show/hide them -- ``` jQuery(document).ready(function(){ $("#sv-link").click(function(){ $("#rv-box").slideToggle("fast"); $("#sv-box").slideToggle(); }); $("#rv-link").click(function(){ $("#sv-box").slideToggle("fast"); $("#rv-box").slideToggle(); }); }); ``` I know it's probably not the best solution and I can probably dig deeper but this is all I needed. I did run into a issue with a couple addresses that had multiple images/perspectives for one location. I'm trying to figure that out, times square footlocker using the address 1460 broadway, is a perfect example. Besides that it seems to work fine. [Google Maps](https://developers.google.com/maps/documentation/embed/start)
Call R functions from sqldf queries Is there a way to call R functions from sqldf queries? E.g. ``` sqldf("select paste('Hello', 'World')") ``` Or, is there a way to define custom functions or stored procedures within the SQLite engine behind sqldf? (I am using sqldf with plain old in-memory R data frames; I'm not connecting to any actual databases.)
*1) Existing Functions*

First make sure that the function you want is not already available. For example, the code in the question is directly supported in SQL already:

```
> sqldf("select 'Hello' || ' ' || 'world' ")
  'Hello' || ' ' || 'world'
1               Hello world
```

*2) RSQLite.extfuns*

Out of the box you also have all the SQL functions from SQLite's dialect plus a large number of user-defined functions; they are listed under `?initExtension` in the RSQLite package.

*3) Other Loadable Extensions*

Functions in any existing SQLite loadable extension can be loaded via the SQLite SQL function `load_extension()`; e.g., see [these extensions](http://www.gaia-gis.it/gaia-sins/)

*4) Custom functions*

Custom functions can be added to SQLite, but they must be written [in C](http://www.sqlite.org/c3ref/create_function.html).

*5) PostgreSQL & sqldf*

sqldf supports not just SQLite but also H2, PostgreSQL and MySQL. PostgreSQL is particularly powerful in this respect. See [this link from the PostgreSQL documentation](http://www.postgresql.org/docs/8.0/static/xfunc.html)

Also see [Pl/R](http://bunsen.credativ.com/~jco/2011/plr-PostgresOpen-2011.pdf) and [R Embedded Postgres package](http://www.omegahat.org/RSPostgres/).

*6) H2 & sqldf*

The H2 database is supported by sqldf. Like SQLite, H2 is included right in the RH2 driver R package, so you don't have to install a separate database; however, you do have to install Java. It has a built-in SHA256 hash function (called hash).

*7) mix sqldf & R*

SQL and R can be mixed like this:

```
library(digest)
transform(sqldf("select * from BOD"), digest = sapply(demand, digest))
```

*8) Other*

See this [SO question and answers](https://stackoverflow.com/questions/3179021/sha1-hashing-in-sqlite-how)

*UPDATE*: Added info on H2.
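To illustrate points 1 and 2 above, here is a minimal sketch using the built-in extension functions; treat the exact set of available functions (e.g. the `stdev()` aggregate) as version-dependent and check `?RSQLite::initExtension` for your installation:

```
library(sqldf)

# Extension functions such as stdev() are available out of the box with
# recent sqldf/RSQLite combinations (see point 2 above)
sqldf("select avg(mpg) as mean_mpg, stdev(mpg) as sd_mpg from mtcars")

# The same functions can be loaded explicitly on a plain RSQLite connection
library(RSQLite)
con <- dbConnect(SQLite())           # in-memory database
dbWriteTable(con, "mtcars", mtcars)  # copy the data frame into SQLite
initExtension(con)                   # load the math/string/aggregate extensions
dbGetQuery(con, "select stdev(mpg) from mtcars")
dbDisconnect(con)
```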
How to make page scroll to top after submit button is pressed in google form? I have embedded a Google Form on my website. The issue is that when the submit button is pressed, the page doesn't go back to the top and shows a lot of blank white space, because you have to scroll a long way down to submit the form. Here's the form I used:

`<iframe src="https://docs.google.com/forms/d/e/1FAIpQLSc2zeeSoq25ZdSkWK5bY5uwp6NJ4r-PdzILlXAeuszyVwQlMA/viewform?embedded=true" width="100%" height="1800px" frameborder="0" marginheight="0" marginwidth="0">Loading…</iframe>`

I tried this code, but it doesn't work.

`<form onsubmit="parent.scrollTo(0, 0); return true">`

NOTE: I don't have any knowledge of JS or jQuery, so I may need a detailed explanation. Thank you
There are two options that you can use.

1. Add a jQuery handler for the iframe's load event

```
<script>
jQuery("iframe").load(function() {
    jQuery("html,body").animate({
        scrollTop: 0
    }, "slow");
});
</script>
```

2. Add an onload attribute to the iframe: `onload="window.parent.scrollTo(0,0)"`

```
<iframe src="https://docs.google.com/forms/d/e/1FAIpQLSc2zeeSoq25ZdSkWK5bY5uwp6NJ4r-PdzILlXAeuszyVwQlMA/viewform?embedded=true" width="100%" height="1800px" frameborder="0" marginheight="0" marginwidth="0" onload="window.parent.scrollTo(0,0)">Loading…</iframe>
```

> The second option is recommended as it will work on forms with
> multiple pages.
>
>
How is testing the registry pattern or singleton hard in PHP? Why is testing [singletons](http://en.wikipedia.org/wiki/Singleton_pattern) or the registry pattern hard in a request-driven language like PHP? You can write and run tests aside from the actual program execution, so that you are free to affect the global state of the program and run some teardown and initialization for each test function to get it to the same state for each test. Am I missing something?
While it's true that *"you can write and run tests aside from the actual program execution, so that you are free to affect the global state of the program and run some teardown and initialization for each test function to get it to the same state for each test"*, it is tedious to do so. You want to test the TestSubject in isolation and not spend time recreating a working environment.

### Example

```
class MyTestSubject {

    protected $registry;

    public function __construct() {
        $this->registry = Registry::getInstance();
    }

    public function foo($id) {
        return $this->doSomethingWithResults(
            $this->registry->get('MyActiveRecord')->findById($id)
        );
    }
}
```

To get this working you have to have the concrete `Registry`. It's hardcoded, and it's a Singleton. The latter means that, to prevent any side effects from a previous test, it has to be reset for each test you will run on MyTestSubject. You could add a `Registry::reset()` method and call that in `setup()`, but adding a method just for being able to test seems ugly. Let's assume you need this method anyway, so you end up with

```
public function setup() {
    Registry::reset();
    $this->testSubject = new MyTestSubject;
}
```

Now you still don't have the 'MyActiveRecord' object it is supposed to return in `foo`. Because you like Registry, your MyActiveRecord actually looks like this

```
class MyActiveRecord {

    protected $db;

    public function __construct() {
        $registry = Registry::getInstance();
        $this->db = $registry->get('db');
    }

    public function findById($id) { … }
}
```

There is another call to Registry in the constructor of MyActiveRecord. Your test has to make sure it contains something, otherwise the test will fail. Of course, our database class is a Singleton as well and needs to be reset between tests. Doh!

```
public function setup() {
    Registry::reset();
    Db::reset();
    Registry::set('db', Db::getInstance('host', 'user', 'pass', 'db'));
    Registry::set('MyActiveRecord', new MyActiveRecord);
    $this->testSubject = new MyTestSubject;
}
```

So with those finally set up, you can do your test

```
public function testFooDoesSomethingToQueryResults() {
    $this->assertSame('expectedResult', $this->testSubject->foo(1));
}
```

and realize you have yet another dependency: your physical test database wasn't set up yet. While you were setting up the test database and filling it with data, your boss came along and told you that you are going [SOA](https://en.wikipedia.org/wiki/Service-oriented_architecture) now and all these database calls have to be replaced with [Web service](https://en.wikipedia.org/wiki/Web_service) calls. There is a new class `MyWebService` for that, and you have to make MyActiveRecord use that instead. Great, just what you needed. Now you have to change all the tests that use the database.

Dammit, you think. All that crap just to make sure that `doSomethingWithResults` works as expected? `MyTestSubject` doesn't really care where the data comes from.

### Introducing mocks

The good news is, you can indeed replace all the dependencies by stubbing or mocking them. A test double will pretend to be the real thing.

```
$mock = $this->getMock('MyWebservice');
$mock->expects($this->once())
     ->method('findById')
     ->with($this->equalTo(1))
     ->will($this->returnValue('Expected Unprocessed Data'));
```

This will create a double for a Web service that **expects** to be called **once** during the test **with** the first argument to **method** `findById` being 1. It **will** return predefined data.
After you put that in a method in your TestCase, your `setup` becomes

```
public function setup() {
    Registry::reset();
    Registry::set('MyWebservice', $this->getWebserviceMock());
    $this->testSubject = new MyTestSubject;
}
```

Great. You no longer have to bother with setting up a real environment now. Well, except for the Registry. How about mocking that too? But how to do that? It's hardcoded, so there is no way to replace it at test runtime. Crap!

But wait a second, didn't we just say MyTestClass doesn't care where the data comes from? Yes, it just cares that it can call the `findById` method. You hopefully think now: why is the Registry in there at all? And right you are. Let's change the whole thing to

```
class MyTestSubject {

    protected $finder;

    public function __construct(Finder $finder) {
        $this->finder = $finder;
    }

    public function foo($id) {
        return $this->doSomethingWithResults(
            $this->finder->findById($id)
        );
    }
}
```

Byebye Registry. We are now injecting the dependency MyWebSe… err… Finder?! Yeah. We just care about the method `findById`, so we are using an interface now

```
interface Finder {
    public function findById($id);
}
```

Don't forget to change the mock accordingly

```
$mock = $this->getMock('Finder');
$mock->expects($this->once())
     ->method('findById')
     ->with($this->equalTo(1))
     ->will($this->returnValue('Expected Unprocessed Data'));
```

and setup() becomes

```
public function setup() {
    $this->testSubject = new MyTestSubject($this->getFinderMock());
}
```

Voila! Nice and easy. We can concentrate on testing MyTestClass now.

While you were doing that, your boss called again and said he wants you to switch back to a database because SOA is really just a buzzword used by overpriced consultants to make you feel enterprisey. This time you don't worry though, because you don't have to change your tests again. They no longer depend on the environment.

Of course, you still have to make sure that both MyWebservice and MyActiveRecord implement the Finder interface for your actual code, but since we assumed them to already have these methods, it's just a matter of slapping `implements Finder` on the class.

And that's it. Hope that helped.

### Additional Resources:

You can find additional information about other drawbacks when testing Singletons and dealing with global state in

- *[Testing Code That Uses Singletons](http://sebastian-bergmann.de/archives/882-Testing-Code-That-Uses-Singletons.html)*

This should be of most interest, because it is by the author of PHPUnit and explains the difficulties with actual examples in PHPUnit.

Also of interest are:

- *[TotT: Using Dependency Injection to Avoid Singletons](http://googletesting.blogspot.com/2008/05/tott-using-dependancy-injection-to.html)*
- *[Singletons are Pathological Liars](http://misko.hevery.com/2008/08/17/singletons-are-pathological-liars/)*
- *[Flaw: Brittle Global State & Singletons](http://misko.hevery.com/code-reviewers-guide/flaw-brittle-global-state-singletons/)*
Displaying elements other than fullscreen element (HTML5 fullscreen API) When I "fullscreen" an element (let's say a div), I can't get any other elements to appear (while in fullscreen mode). Why is that happening? How can I accomplish this?

Related: [Is there a way to overlay a <canvas> over a fullscreen HTML5 <video>?](https://stackoverflow.com/questions/9461453/is-there-a-way-to-overlay-a-canvas-over-a-fullscreen-html5-video)
It seems that browsers (Chrome 28, Firefox 23) set the `z-index` of fullscreened elements to 2147483647 (the largest 32-bit signed number). According to tests, setting the `z-index` of other elements to that same value will cause them to show, but the `z-index` of the fullscreened element itself cannot effectively be changed (you can set it, but the browser simply ignores the value, even with !important).

Maybe the only reference I could find to this: <https://github.com/WebKit/webkit/blob/master/LayoutTests/fullscreen/full-screen-zIndex.html>

Also, in Chrome dev tools:

![Computed style on fullscreened element](https://i.stack.imgur.com/NffKp.png)

So either set the other elements to the maximum `z-index`, or, better, create a container element, make everything you want to display a child of that container, and fullscreen the container instead.
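Here is a minimal sketch of that container approach; the element IDs, the double-click trigger, and the vendor-prefixed fallbacks for older WebKit/Gecko builds are illustrative assumptions:

```
<div id="fs-container">
  <video id="player" src="movie.mp4"></video>
  <div id="overlay">This overlay stays visible in fullscreen</div>
</div>

<script>
  // Fullscreen the wrapper rather than the video, so every child keeps rendering.
  var container = document.getElementById('fs-container');
  var request = container.requestFullscreen
             || container.webkitRequestFullscreen
             || container.mozRequestFullScreen;

  document.getElementById('player').addEventListener('dblclick', function () {
    request.call(container); // must be triggered by a user gesture
  });
</script>
```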
iOS: Xcode 4.2 and Navigation Controller In Xcode 4.2, I selected "New Project" and chose "Single View Application", but now I want to add a navigation controller. How can I do this in Xcode 4.2, without a storyboard?
Unless you are adding the UINavigationController to another UIViewController that is used for a different method of navigation (i.e. a UISplitViewController or UITabBarController), I would recommend adding the UINavigationController to your application window in the AppDelegate and then pushing the UIViewController that contains your view onto it.

If you are adding the UINavigationController as your main UIViewController, you can easily do this programmatically in the following method in the AppDelegate:

```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions;
```

The code I would add is:

```
UINavigationController *navcon = [[UINavigationController alloc] init];
[navcon pushViewController:self.viewController animated:NO];
self.window.rootViewController = navcon;
```

Now, in your **AppDelegate.m**, it should look like this:

```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    self.window = [[[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]] autorelease];
    // Override point for customization after application launch.
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
        self.viewController = [[[ViewController alloc] initWithNibName:@"ViewController_iPhone" bundle:nil] autorelease];
    } else {
        self.viewController = [[[ViewController alloc] initWithNibName:@"ViewController_iPad" bundle:nil] autorelease];
    }
    UINavigationController *navcon = [[UINavigationController alloc] init];
    [navcon pushViewController:self.viewController animated:NO];
    self.window.rootViewController = navcon;
    [self.window makeKeyAndVisible];
    return YES;
}
```

You can further learn how to utilize the UINavigationController by checking out the [UINavigationController Apple Documentation](http://developer.apple.com/library/ios/#documentation/uikit/reference/UINavigationController_Class/Reference/Reference.html) and their example projects, which you can download from the same documentation page. The example projects will help you grasp the various ways you can utilize the UINavigationController.
CSS shorthand for positioning Is there any shorthand for `top`, `right`, `bottom`, and `left`, or for `width` and `height`? I have a lot of CSS like this

```
#topDiv {
    position:absolute;
    top:0px;
    left:0px;
    right:0px;
    height:100px;
}
#centerDiv {
    position:absolute;
    top:100px;
    bottom:120px;
    left:0px;
    right:0px;
}
#consoleDiv {
    position:absolute;
    left:0px;
    right:0px;
    bottom:0px;
    height:120px;
}
```

I would like to be able to write something like this

```
position: absolute 10px 50px 50px 100px;
```

or

```
size: 400px 200px;
```
**2021 Update**: The [`inset` property](https://developer.mozilla.org/en-US/docs/Web/CSS/inset) is currently gaining adoption. This property uses the same multi-value syntax as the shorthand `margin` property. For browser compatibility, please see [MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/inset#browser_compatibility).

---

No shorthand exists to combine all of these values. These are all *different* properties, unlike, for instance, `background`, which has colors, images, positions and repeat instructions and as such *can* be coalesced into a shorthand form. If you really wanted this type of control, you could use something like SASS and create a [mixin](http://sass-lang.com/docs/yardoc/file.SASS_REFERENCE.html#mixins).
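With `inset`, the rules from the question could be written like this (same top/right/bottom/left order as `margin`); this is only a sketch of the shorthand for supporting browsers, with unchanged behavior:

```
#topDiv {
    position: absolute;
    inset: 0 0 auto 0;      /* top right bottom left */
    height: 100px;
}
#centerDiv {
    position: absolute;
    inset: 100px 0 120px 0;
}
#consoleDiv {
    position: absolute;
    inset: auto 0 0 0;
    height: 120px;
}
```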
How to validate if a date is the last day of the month with JavaScript? How can I validate that a user-entered date is the last day of its month using JavaScript?
(**Update**: See the final example at the bottom, but the rest is left as background.)

You can add a day to the `Date` instance and see if the month changes (because JavaScript's `Date` object fixes up invalid day-of-month values intelligently), e.g.:

```
function isLastDay(dt) {
    var test = new Date(dt.getTime()),
        month = test.getMonth();

    test.setDate(test.getDate() + 1);
    return test.getMonth() !== month;
}
```

[Gratuitous live example](http://jsbin.com/emoto4)

...or as [paxdiablo](https://stackoverflow.com/users/14860/paxdiablo) pointed out, you can check the resulting day-of-month, which is probably faster (one fewer function call) and is definitely a bit shorter:

```
function isLastDay(dt) {
    var test = new Date(dt.getTime());
    test.setDate(test.getDate() + 1);
    return test.getDate() === 1;
}
```

[Another gratuitous live example](http://jsbin.com/emoto4/2)

You could embed more logic in there to avoid creating the temporary date object if you liked, since it's *really* only needed in February and the rest is just a table lookup, but the advantage of both of the above is that they defer *all* date math to the JavaScript engine. Creating the object is not going to be expensive enough to worry about.

---

...and finally: Since the [JavaScript specification](http://www.ecma-international.org/publications/standards/Ecma-262.htm) requires (Section 15.9.1.1) that a day is *exactly* 86,400,000 milliseconds long (when in reality days [vary in length](http://en.wikipedia.org/wiki/Day#Leap_seconds) a bit), we can make the above even shorter by adding a day's worth of milliseconds directly when creating the new `Date`:

```
function isLastDay(dt) {
    return new Date(dt.getTime() + 86400000).getDate() === 1;
}
```

[Final gratuitous example](http://jsbin.com/emoto4/4)
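A few quick sanity checks with the functions above (remember that `Date` months are zero-based, so `0` is January):

```
console.log(isLastDay(new Date(2011, 0, 31))); // true  (Jan 31)
console.log(isLastDay(new Date(2011, 1, 28))); // true  (Feb 28; 2011 is not a leap year)
console.log(isLastDay(new Date(2012, 1, 28))); // false (2012 is a leap year, so Feb has 29 days)
console.log(isLastDay(new Date(2011, 3, 15))); // false (Apr 15)
```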
Authenticating a user with Instagram on iOS: specifying redirect\_uri I am developing an iOS app (using Swift) that allows the user to authenticate through Instagram using OAuth 2.0.

In the past, everything was working fine as I was able to specify the authorization URL as such:

`https://api.instagram.com/oauth/authorize/?client_id=xxx&redirect_uri=myiosapp://authorize&response_type=code`

The key point here being the redirect\_uri `myiosapp://authorize`

My problem is that I am no longer able to register a custom URL scheme with Instagram, thereby making it impossible(?) to handle the redirect exclusively through my app. If I do try to add such a URI in the "Valid redirect URIs:" field, I get the following error:

`You must enter an absolute URI that starts with http:// or https://`

What is the recommended way to handle authentication with Instagram exclusively through an iOS native application?
After figuring it out, I thought I'd post my solution for anyone who comes across the same problem.

First of all, I'll just accept that Instagram no longer allows custom URL schemes in the "Security" -> "Valid redirect URIs" field. Instead, I will enter an arbitrary but valid URI that I can uniquely identify. For example:

`http://www.mywebsite.com/instagram_auth_ios`

Now, when attempting to authorize with Instagram, I'll use that as the redirect URI - even though no webpage actually exists at that URI. Example:

`https://api.instagram.com/oauth/authorize/?client_id=xxx&redirect_uri=http://www.mywebsite.com/instagram_auth_ios&response_type=code`

Finally, I'll use the `UIWebViewDelegate`'s `shouldStartLoadWithRequest` method to intercept the redirect request before it runs, and instead call my original custom URI (that way I don't have to rewrite anything). Here's how I wrote that method:

```
func webView(webView: UIWebView, shouldStartLoadWithRequest request: NSURLRequest, navigationType: UIWebViewNavigationType) -> Bool {
    guard let url = request.URL where url.host == "www.mywebsite.com" && url.path == "/instagram_auth_ios" else { return true }
    guard let authUrl = NSURLComponents(URL: url, resolvingAgainstBaseURL: false) else { return true }

    // Customize the scheme/host/path, etc. as desired for your app
    authUrl.scheme = "myappschema"
    authUrl.host = "instagram"
    authUrl.path = ""

    UIApplication.sharedApplication().openURL(authUrl.URL!)
    return false
}
```

There's one small caveat with returning false in the shouldStartLoadWithRequest method, in that it will always complain with a "Frame Load Interrupted" error. This doesn't seem to adversely affect anything and can (probably) be safely ignored.
What is the proper way to capture an HTTP response with Puppeteer? I am trying to capture the HTTP response status from a user sign-up. My code looks like this:

```
it.only('returns a 400 response if email is taken', async () => {
  await page.goto(`${process.env.DOMAIN}/sign-up`)
  await page.waitFor('input[id="Full Name"]')
  await page.type('input[id="Full Name"]', 'Luke Skywalker')
  await page.type('input[id="Email"]', '[email protected]')
  await page.type('input[id="Password"]', 'LukeSkywalker123', {delay: 100})
  await page.click('input[type="submit"]', {delay: 1000})
  const response = await page.on('response', response => response)
  console.log('request status', response.status)
  // expect(response).toEqual(400)
})
```

The docs give an example of intercepting the request and doing things with it:

```
await page.setRequestInterception(true);
page.on('request', request => {
  request.respond({
    status: 404,
    contentType: 'text/plain',
    body: 'Not Found!'
  });
});
```

And I have tried a similar pattern to no avail, along with many other patterns. Everything I do returns the `page`, a huge object with no status on it that I can see. Any help is much appreciated.

**WHAT WORKED:** thank you to @tomahaug for steering me in the correct direction. My first problem was placement: the listener needs to be set up before the request is made; I had it just after the request. Makes sense. My biggest issue was assigning the listener to a variable so that I could call the expect as my last line. Assigning it to a variable caused the `page` to be returned. What I needed to do was just run the test inside the listener. Using `done()` throws an error for me, so I closed off my test as shown below; this is the working version of my code:

```
it.only('returns a 400 response if email is taken', async () => {
  await page.goto(`${process.env.DOMAIN}/sign-up`)
  await page.waitFor('input[id="Full Name"]')
  await page.type('input[id="Full Name"]', 'Luke Skywalker')
  await page.type('input[id="Email"]', '[email protected]')
  await page.type('input[id="Password"]', 'LukeSkywalker123', {delay: 100})
  await page.on('response', response => {
    if (response.request().method === 'POST' && response.url === `${process.env.USERS_API_DOMAIN}/sessions`) {
      expect(response.status).toEqual(400)
    }
  })
  await page.click('input[type="submit"]', {delay: 1000})
})

after(async function () {
  await browser.close()
})
```

Hope this helps someone else!
I believe you should do something along these lines. What the code does is attach a listener for responses and then click the submit button. When a response is received, it checks the status code and asserts it.

You might want to have an `if`-statement in the callback that checks that it is the actual response from your form, as the response handler might emit events for other concurrent requests.

```
it.only('returns a 400 response if email is taken', async () => {
  await page.goto(`${process.env.DOMAIN}/sign-up`)
  await page.waitFor('input[id="Full Name"]')
  await page.type('input[id="Full Name"]', 'Luke Skywalker')
  await page.type('input[id="Email"]', '[email protected]')
  await page.type('input[id="Password"]', 'LukeSkywalker123', {delay: 100})

  page.on('response', (response) => {
    if (
      response.request().method === 'POST' &&
      response.url === `${process.env.USERS_API_DOMAIN}/sessions`) {
      expect(response.status).toEqual(400)
    }
  })

  await page.click('input[type="submit"]', {delay: 1000})
})
```

I have not tested the code, but it should give you the right idea.

Edit: Adjusted to reflect what worked out in the end.
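As a side note, if your Puppeteer version provides `page.waitForResponse` (added around v1.6.0, where `url`, `status`, and `method` are accessor methods rather than properties), a sketch like the following avoids the bare listener entirely; the `/sessions` URL check mirrors the one above and is an assumption about your API:

```
const [response] = await Promise.all([
  // resolves with the first response that matches the predicate
  page.waitForResponse(res =>
    res.url().includes('/sessions') && res.request().method() === 'POST'
  ),
  page.click('input[type="submit"]', {delay: 1000}),
])
expect(response.status()).toEqual(400)
```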
How can I detect when the XAML Slider is Completed? In XAML I have a `<Slider />`. It has the ValueChanged event, which fires with every change to Value. I need to detect when the value change is over. LostFocus and PointerReleased are not the correct events. How can I detect this?
You can create a new class and inherit from Slider. From there on, you can look for the Thumb control & listen for the events you want. Something like this should work: ``` public class SliderValueChangeCompletedEventArgs : RoutedEventArgs { private readonly double _value; public double Value { get { return _value; } } public SliderValueChangeCompletedEventArgs(double value) { _value = value; } } public delegate void SlideValueChangeCompletedEventHandler(object sender, SliderValueChangeCompletedEventArgs args); public class ExtendedSlider : Slider { public event SlideValueChangeCompletedEventHandler ValueChangeCompleted; private bool _dragging = false; protected void OnValueChangeCompleted(double value) { if (ValueChangeCompleted != null) { ValueChangeCompleted(this, new SliderValueChangeCompletedEventArgs(value) ); } } protected override void OnApplyTemplate() { base.OnApplyTemplate(); var thumb = base.GetTemplateChild("HorizontalThumb") as Thumb; if (thumb != null) { thumb.DragStarted += ThumbOnDragStarted; thumb.DragCompleted += ThumbOnDragCompleted; } thumb = base.GetTemplateChild("VerticalThumb") as Thumb; if (thumb != null) { thumb.DragStarted += ThumbOnDragStarted; thumb.DragCompleted += ThumbOnDragCompleted; } } private void ThumbOnDragCompleted(object sender, DragCompletedEventArgs e) { _dragging = false; OnValueChangeCompleted(this.Value); } private void ThumbOnDragStarted(object sender, DragStartedEventArgs e) { _dragging = true; } protected override void OnValueChanged(double oldValue, double newValue) { base.OnValueChanged(oldValue, newValue); if (!_dragging) { OnValueChangeCompleted(newValue); } } } ```
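Wiring it up could look roughly like this; the panel name, handler body, and `Debug.WriteLine` output are only a sketch of how the custom event might be consumed:

```
using System.Diagnostics; // for Debug.WriteLine

// Somewhere in your page/view code-behind:
var slider = new ExtendedSlider { Minimum = 0, Maximum = 100 };

// The custom delegate accepts a lambda; args.Value carries the final value
slider.ValueChangeCompleted += (s, args) =>
{
    Debug.WriteLine("Final slider value: " + args.Value);
};

ContentRoot.Children.Add(slider); // assumes a panel named ContentRoot in your XAML
```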
R Shiny: reactiveValues vs reactive This question is related to [this one](https://stackoverflow.com/questions/18324397). The two can generate the same functionality, but the implementation is slightly different. One significant difference is that `reactiveValues()` is a container that can hold several values, like `input$`. In the [shiny documentation](http://shiny.rstudio.com/articles/reactivity-overview.html), functionality is usually implemented using `reactive()`, but in most cases I find `reactiveValues()` more convenient. Is there any catch here? Are there any other major differences between the two that I might not be aware of?

Are these two code snippets equivalent? See the same [example code](http://shiny.rstudio.com/articles/reactivity-overview.html) implemented using:

1. a reactive expression:

```
library(shiny)

ui <- fluidPage(
  shiny::numericInput(inputId = 'n',label = 'n',value = 2),
  shiny::textOutput('nthValue'),
  shiny::textOutput('nthValueInv')
)

fib <- function(n) ifelse(n<3, 1, fib(n-1)+fib(n-2))

server<-shinyServer(function(input, output, session) {
  currentFib <- reactive({ fib(as.numeric(input$n)) })

  output$nthValue <- renderText({ currentFib() })
  output$nthValueInv <- renderText({ 1 / currentFib() })
})

shinyApp(ui = ui, server = server)
```

2. a reactive value:

```
library(shiny)

ui <- fluidPage(
  shiny::numericInput(inputId = 'n',label = 'n',value = 2),
  shiny::textOutput('nthValue'),
  shiny::textOutput('nthValueInv')
)

fib <- function(n) ifelse(n<3, 1, fib(n-1)+fib(n-2))

server<-shinyServer(function(input, output, session) {
  myReactives <- reactiveValues()
  observe( myReactives$currentFib <- fib(as.numeric(input$n)) )

  output$nthValue <- renderText({ myReactives$currentFib })
  output$nthValueInv <- renderText({ 1 / myReactives$currentFib })
})

shinyApp(ui = ui, server = server)
```
There is a catch, though it won't come into play in your example.

The **shiny** developers designed `reactive()` to be **lazy**, meaning that the expression contained in it will only be executed when it is called by one of its dependents. When one of its reactive dependencies is changed, it clears its cache and notifies its own dependents, but it is not itself executed until asked to by one of those dependents. (So if, say, its sole dependent is a `textOutput()` element on a hidden tab, it won't actually be executed unless and until that tab is opened.)

`observe()`, on the other hand, is **eager**; the expression that it contains will be executed right away whenever one of its reactive dependencies is changed -- even if its value is not needed by any of its dependents (and in fact even if it *has* no dependents). Such eagerness is *desirable* when you're calling `observe()` for its side-effects, but it can be *wasteful* when you're only using it to pass on the return value of its contents to other reactive expressions or endpoints down the line.

Joe Cheng explains this distinction quite well in his 2016 Shiny Developer Conference presentation on "Effective reactive programming", [available here](https://resources.rstudio.com/shiny-developer-conference). See especially the bit starting around 30:20 in the presentation's [second hour](https://resources.rstudio.com/shiny-developer-conference/shinydevcon-reactivity-joecheng-part-2-1080p). If you watch until 40:42 (blink and you'll miss it!) he briefly characterizes the behavior of the `observe()`/`reactiveValues()` combination that you like.
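A small sketch of that difference, built on the app from the question; the `message()` calls are only there to show when each block runs:

```
server <- shinyServer(function(input, output, session) {
  currentFib <- reactive({
    message("reactive ran")        # lazy: runs only when something asks for the value
    fib(as.numeric(input$n))
  })

  observe({
    message("observe ran")         # eager: runs on every change to input$n,
    fib(as.numeric(input$n))       # even though nothing uses the result
  })

  # Comment this output out and "reactive ran" never appears,
  # while "observe ran" still prints on every change to input$n.
  output$nthValue <- renderText({ currentFib() })
})
```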