id | title | body | accepted_answer_id | answer_count | comment_count | community_owned_date | creation_date | favorite_count | last_activity_date | last_edit_date | last_editor_display_name | last_editor_user_id | owner_display_name | owner_user_id | parent_id | post_type_id | score | tags | view_count | answer_body |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9,709,758 | Don't Display Numbers/Bullets for Ordered or Unordered List | <p>All,
I'm using the following code to generate some li items in an ol. </p>
<pre><code>$output = '<div id="menu_options">';
$output .= '<ol class="tabs">';
foreach($menu_items as $menu){
$output .= '<li><a href="'.$menu->ID.'" class="menu_page_id">'.$menu->title.'</a></li>';
}
$output .= '</ol>';
$output .= '</div>';
$output .= '<div id="menu_content">This is content</div>';
</code></pre>
<p>This works fine however I'd like to get rid of the 1., 2. etc. Is there a way to remove these and not display anything where these would typically go?</p> | 9,709,788 | 2 | 3 | null | 2012-03-14 20:27:18.663 UTC | 6 | 2022-05-31 13:13:20.547 UTC | null | null | null | null | 1,048,676 | null | 1 | 33 | css | 61,488 | <p>Yes, use a style property of <code>list-style-type:none</code> on the <code>ol</code>, either inline as a <code>style</code> attribute, or in your stylesheet under the <code>.tabs</code> class.</p>
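<p>For example, a minimal sketch using the <code>.tabs</code> class from the question's markup (the inline variant is shown as a comment):</p>
<pre><code>.tabs { list-style-type: none; }

/* or inline: <ol class="tabs" style="list-style-type:none"> */
</code></pre>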
<p>See <a href="http://www.w3.org/TR/CSS21/generate.html#lists">http://www.w3.org/TR/CSS21/generate.html#lists</a></p> |
9,849,719 | What do square brackets mean in array initialization in C? | <pre><code>static uint8_t togglecode[256] = {
[0x3A] CAPSLOCK,
[0x45] NUMLOCK,
[0x46] SCROLLLOCK
};
</code></pre>
<p>What's the meaning of <code>[0x3A]</code> here? I have only learned statements like <code>int a[2] = {1, 2};</code></p> | 9,849,809 | 4 | 1 | null | 2012-03-24 06:44:00.977 UTC | 21 | 2022-01-27 03:33:48.337 UTC | 2012-03-24 14:22:52.877 UTC | null | 966,703 | null | 1,277,877 | null | 1 | 93 | c|arrays | 11,255 | <p>It means initialise the <em>n</em>-th element of the array. The example you've given will mean that:</p>
<pre><code>togglecode[0x3A] == CAPSLOCK
togglecode[0x45] == NUMLOCK
togglecode[0x46] == SCROLLLOCK
</code></pre>
<p>These are called <a href="http://gcc.gnu.org/onlinedocs/gcc-4.0.4/gcc/Designated-Inits.html" rel="noreferrer">"designated initializers"</a>, and are actually part of the C99 standard. However, the syntax without the <code>=</code> is not. From that page:</p>
<blockquote>
<p>An alternative syntax for this which has been obsolete since GCC 2.5 but GCC still accepts is to write <code>[index]</code> before the element value, with no <code>=</code>. </p>
</blockquote> |
9,747,952 | Pane Title in Tmux | <p>On my local machine I've got 3 node.js instances running simultaneously. Each has its own pane in a tmux window called "servers". The problem is that it's not easy to figure out which node is running in which pane, because their logs are similar. </p>
<p>What I need is a title for every pane. As far as I can tell, tmux itself doesn't have this feature: it has titles only for windows, not for panes. Launching a separate tmux session inside every pane for every node.js instance looks like overkill. </p>
<p>So is there some small program that launches a command, wrapping its output with a specified status bar?</p>
<p>Thanks in advance</p> | 9,757,133 | 11 | 1 | null | 2012-03-17 06:46:08.913 UTC | 44 | 2022-07-20 11:18:37.953 UTC | null | null | null | null | 314,883 | null | 1 | 112 | command-line|title|tmux|pane | 85,348 | <p><em>tmux</em> does support per-pane titles, but it does not provide a per-pane location to display these titles.</p>
<p>You can set a pane's title with the escape sequence ESC <code>]2;</code> ... ESC <code>\</code> (e.g. see the section called <em>Names and Titles</em> in the <em>tmux</em> manpage). You could do this from the shell like this:</p>
<pre><code>printf '\033]2;%s\033\\' 'title goes here'
</code></pre>
<p>Each pane's title defaults to the system's hostname. By default the active pane's title is displayed on the right side of the <em>tmux</em> status line (the default global value of the session variable <code>status-right</code> is <code>"#22T" %H:%M %d-%b-%y</code>, which shows 22 characters of the pane's title, the time, and the date).</p>
<p>So, as long as you are satisfied with being able to see the active pane's title (i.e. willing to switch panes to see the title of an inactive pane), you can get by with the default functionality. Just send the appropriate title-setting escape sequence before starting the main command for each pane.</p>
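<p>For example, to set the title and then start one of the <em>Node.js</em> instances in a pane (a sketch; the title and command are placeholders for your own):</p>
<pre><code>printf '\033]2;%s\033\\' 'node: foo' ; node foo.js
</code></pre>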
<hr>
<p>If you absolutely need a dedicated line to display some per-pane information, then nested <em>tmux</em> sessions may not be as much (unnecessary) "overkill" as you might first think.</p>
<p>In the general case, to provide an inviolate status line on some given terminal, you will need a full terminal (re)emulator that sits between the original terminal and a new terminal (one with one fewer lines). Such (re)emulation is needed to translate control sequences sent to the inner terminal and translate them for the original terminal. For example, to maintain a status line at the bottom of the outer terminal, the command</p>
<blockquote>
<p>Move to the last line.</p>
</blockquote>
<p>sent to the inner terminal must be become</p>
<blockquote>
<p>Move to the next to last line.</p>
</blockquote>
<p>when translated for and sent to the outer terminal. Likewise, an LF sent to the inner terminal must become</p>
<blockquote>
<p>If the cursor is on the next to last line, then scroll this line and all the lines above it up one line, to provide a clear next-to-last line (protecting the status line on the last line).
Otherwise, send an LF.</p>
</blockquote>
<p>in the outer terminal.</p>
<p>Programs like <em>tmux</em> and <em>screen</em> are just such terminal re-emulators. Sure, there is a lot of other functionality wrapped around the terminal emulator, but you would need a large chunk of terminal emulation code just to provide a <em>reliable</em> status line.</p>
<hr>
<p>There is, however, a light-weight solution as long as</p>
<ol>
<li>your programs (<em>Node.js</em> instances) have limited terminal interactions with the panes in which they are running (i.e. no cursor positioning), and</li>
<li>you do not resize the panes while your programs are running.</li>
</ol>
<p>Like many terminal emulators, <em>tmux</em> supports a "set scrolling region" terminal control command in its panes. You could use this command to limit the scrolling region to the top (or bottom) N-1 lines of the terminal and write some sort of instance-identifying text into the non-scrolling line.</p>
<p>The restrictions (no cursor movement commands allowed, no resizing) are required because the program that is generating the output (e.g. a <em>Node.js</em> instance) has no idea that scrolling has been limited to a particular region. If the output-generating program happened to move the cursor outside of the scrolling region, then the output might become garbled. Likewise, the terminal emulator probably automatically resets the scrolling region when the terminal is resized (so the "non-scrolling line" will probably end up scrolling away).</p>
<p>I wrote a script that uses <code>tput</code> to generate the appropriate control sequences, write into the non-scrolling line, and run a program after moving the cursor into the scrolling region:</p>
<pre><code>#!/bin/sh
# usage: no_scroll_line top|bottom 'non-scrolling line content' command to run with args
#
# Set up a non-scrolling line at the top (or the bottom) of the
# terminal, write the given text into it, then (in the scrolling
# region) run the given command with its arguments. When the
# command has finished, pause with a prompt and reset the
# scrolling region.
get_size() {
set -- $(stty size)
LINES=$1
COLUMNS=$2
}
set_nonscrolling_line() {
get_size
case "$1" in
t|to|top)
non_scroll_line=0
first_scrolling_line=1
scroll_region="1 $(($LINES - 1))"
;;
b|bo|bot|bott|botto|bottom)
first_scrolling_line=0
scroll_region="0 $(($LINES - 2))"
non_scroll_line="$(($LINES - 1))"
;;
*)
echo 'error: first argument must be "top" or "bottom"'
exit 1
;;
esac
clear
tput csr $scroll_region
tput cup "$non_scroll_line" 0
printf %s "$2"
tput cup "$first_scrolling_line" 0
}
reset_scrolling() {
get_size
clear
tput csr 0 $(($LINES - 1))
}
# Set up the scrolling region and write into the non-scrolling line
set_nonscrolling_line "$1" "$2"
shift 2
# Run something that writes into the scrolling region
"$@"
ec=$?
# Reset the scrolling region
printf %s 'Press ENTER to reset scrolling (will clear screen)'
read a_line
reset_scrolling
exit "$ec"
</code></pre>
<p>You might use it like this:</p>
<pre><code>tmux split-window '/path/to/no_scroll_line bottom "Node instance foo" node foo.js'
tmux split-window '/path/to/no_scroll_line bottom "Node instance bar" node bar.js'
tmux split-window '/path/to/no_scroll_line bottom "Node instance quux" node quux.js'
</code></pre>
<p>The script should also work outside of <em>tmux</em> as long as the terminal supports and publishes its <code>csr</code> and <code>cup</code> terminfo capabilities.</p> |
8,039,562 | Cannot open shared object file | <p>I am trying to compile one of the projects found here
USB-I2C/SPI/GPIO Interface Adapter.</p>
<p>I downloaded the <code>i2c_bridge-0.0.1-rc2.tgz</code> package. I installed <code>libusb</code> and that seemed to go well with no issues. I go into the <code>i2c_bridge-0.0.1-rc2/</code> directory and make. That compiles. I move into the <code>i2c_bridge-0.0.1-rc2/i2c</code> folder and make. It compiles and gives me <code>./i2c</code>. However, when I run it, it says <code>error while loading shared libraries: libi2cbrdg.so: cannot open shared object file: No such file or directory</code></p>
<p>The makefile in <code>i2c_bridge-0.0.1-rc2/i2c</code> has the library directory as <code>../</code>. The <code>libi2cbrdg.so</code> is in this directory (<code>i2c_bridge-0.0.1-rc2</code>). I also copied the file to <code>/usr/local/lib</code>. An <code>ls</code> of the <code>i2c_bridge-0.0.1-rc2/</code> directory is</p>
<pre><code>i2c i2cbrdg.d i2cbrdg.o libi2cbrdg.a Makefile tests
i2cbrdg.c i2cbrdg.h INSTALL libi2cbrdg.so README u2c4all.sh
</code></pre>
<p>(That <code>i2c</code> is a directory)</p>
<p>If I <code>sudo ./i2c</code>, it still gives me the problem.</p>
<p>I had to take away the <code>-Werror</code> and <code>-noWdecrepated</code> (spelling?) options in all the makefiles to get them to compile, but that shouldn't affect this should it?</p>
<p>What else is necessary for it to find the <code>.so</code> file? If anyone can help me find out what is wrong I would be very grateful. If more information is needed I can post it.</p> | 8,039,774 | 3 | 1 | null | 2011-11-07 16:36:50.653 UTC | 6 | 2021-03-05 17:58:54.003 UTC | 2016-12-19 21:09:53.067 UTC | null | 864,968 | null | 766,251 | null | 1 | 16 | build|find|debian|shared-objects | 54,073 | <p>You have to distinguish between finding <code>so</code>'s at compile-time and at run-time. The <code>-L</code> flag you give at compile-time has nothing to do with localizing the library at run-time. This is rather done via a number of variables and some paths embedded in the library.</p>
<p>The best hot-fix for this problem is often setting LD_LIBRARY_PATH to the directory with the <code>.so</code> file, e.g.:</p>
<pre><code> $ LD_LIBRARY_PATH=.. ./i2c
</code></pre>
<p>For a long-term solution, you need to either have a close look at the whole LD system with <code>rpath</code> and <code>runpath</code>, or use <code>libtool</code> (which solves these issues for you portably).</p>
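<p>For illustration only, a typical way to bake a run-time search path into the binary at link time is the linker's <code>rpath</code> option (a sketch; the object file name and paths are assumptions, adjust them to your build):</p>
<pre><code>gcc -o i2c main.o -L.. -li2cbrdg -Wl,-rpath,'$ORIGIN/..'
</code></pre>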
<p>Copying a file to <code>/usr/local/lib</code> is often insufficient because <code>ld</code> caches the available libraries, so you need to re-run <code>ldconfig</code> (as root) after you copied a library to <code>/usr/local/lib</code>.</p> |
11,705,337 | javascript/jquery disable submit button on click, prevent double submitting | <p>So I have the submit button that looks like this:</p>
<pre><code><a href="#" id="submitbutton"
onClick="document.getElementById('UploaderSWF').submit();">
<img src="../images/user/create-product.png" border="0" /></a>
</code></pre>
<p>When I double click it, it double submits obviously, and the problem is that
I'm saving the information in the database so I'll have duplicate information there,
and I don't want that. This uploader uses Flash and JavaScript, and here is a little piece
of code that is relevant to the submit thing (if it helps)</p>
<pre><code>$.fn.agileUploaderSubmit = function() {
if($.browser.msie && $.browser.version == '6.0') {
window.document.agileUploaderSWF.submit();
} else {
document.getElementById('agileUploaderSWF').submit();
return false;
}
}
</code></pre>
<p>Thank you guys.
I appreciate your help. This is really something I was unable to do myself
because I have so little experience with JS and I don't really know how
to do stuff.
THANKS.</p> | 11,705,359 | 5 | 0 | null | 2012-07-28 22:59:49.2 UTC | 2 | 2019-11-14 10:50:52.86 UTC | 2012-07-28 23:24:06.64 UTC | null | 568,785 | null | 1,381,947 | null | 1 | 8 | javascript|jquery|double-submit-prevention | 45,394 | <p>Try this snipped:</p>
<pre><code>$('#your_submit_id').click(function(){
     $(this).attr('disabled', true); // pass a value so the attribute is actually set
});
</code></pre>
<p><strong>edit 1</strong></p>
<p>Oh, in your case it is a link and not a submit button ...</p>
<pre><code>var submitted = false;
$.fn.agileUploaderSubmit = function() {
if ( false == submitted )
{
submitted = true;
if($.browser.msie && $.browser.version == '6.0') {
window.document.agileUploaderSWF.submit();
} else {
document.getElementById('agileUploaderSWF').submit();
}
}
return false;
}
</code></pre>
<p><strong>edit 2</strong></p>
<p>To simplify this, try this:</p>
<pre><code><!doctype html>
<html dir="ltr" lang="en">
<head>
<meta charset="utf-8" />
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script type="text/javascript">
<!--//--><![CDATA[//><!--
$(document).ready(function()
{
$('#yourSubmitId').click(function()
{
$(this).attr('disabled',true);
/* your submit stuff here */
return false;
});
});
//--><!]]>
</script>
</head>
<body>
<form id="yourFormId" name="yourFormId" method="post" action="#">
<input type="image" id="yourSubmitId" name="yourSubmitId" src="yourImage.png" alt="Submit" />
</form>
</body>
</html>
</code></pre>
<p>Use form elements, like <code><input type="image" /></code>, to submit a form not a normal link.</p>
<p>This works fine!</p>
<p>Take a look at <a href="http://api.jquery.com/jQuery.post/" rel="noreferrer">jQuery.post()</a> to submit your form.</p>
<p>Good luck.</p>
<p><strong>edit 3</strong></p>
<p>This works well for me too:</p>
<pre><code><!doctype html>
<html dir="ltr" lang="en">
<head>
<meta charset="utf-8" />
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script type="text/javascript">
<!--//--><![CDATA[//><!--
$(document).ready(function()
{
var agileUploaderSWFsubmitted = false;
$('#submitbutton').click(function()
{
if ( false == agileUploaderSWFsubmitted )
{
agileUploaderSWFsubmitted = true;
//console.log( 'click event triggered' );
if ( $.browser.msie && $.browser.version == '6.0' )
{
window.document.agileUploaderSWF.submit();
}
else
{
document.getElementById( 'agileUploaderSWF' ).submit();
}
}
return false;
});
});
//--><!]]>
</script>
</head>
<body>
<form id="agileUploaderSWF" name="agileUploaderSWF" method="post" action="http://your.action/script.php">
<input type="text" id="agileUploaderSWF_text" name="agileUploaderSWF_text" />
</form>
<a href="#" id="submitbutton"><img src="../images/user/create-product.png" border="0" /></a>
</body>
</html>
</code></pre>
<p>Hopefully this helps.</p> |
11,810,020 | How to handle session expiry based on Redis? | <p>I want to implement a session store based on Redis. I would like to put session data into Redis, but I don't know how to handle session expiry. I could loop through all the Redis keys (session ids) and evaluate the last access time and max idle time, but that means loading all the keys into the client, and there may be 1000m session keys, which may lead to very poor I/O performance.<br>
I want to let Redis manage the expire, but there are no listener or callback when the key expire, so it is impossible to trigger HttpSessionListener. Any advice?</p> | 11,815,594 | 1 | 1 | null | 2012-08-04 16:02:57.42 UTC | 26 | 2018-10-25 11:47:32.593 UTC | 2018-10-25 11:47:32.593 UTC | null | 7,621,349 | null | 275,756 | null | 1 | 14 | session|redis|store | 19,150 | <p>So you need your application to be notified when a session expires in Redis.</p>
<p>While Redis does not support this feature, there are a number of tricks you can use to implement it.</p>
<p><strong>Update: From version 2.8.0, Redis does support this <a href="http://redis.io/topics/notifications">http://redis.io/topics/notifications</a></strong></p>
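<p>(For the 2.8+ route, the gist is to enable expired-key event notifications and subscribe to them; a rough sketch, with database 0 assumed:)</p>
<pre><code>CONFIG SET notify-keyspace-events Ex
PSUBSCRIBE __keyevent@0__:expired
</code></pre>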
<p>First, people are thinking about it: this is still under discussion, but it might be added to a future version of Redis. See the following issues:</p>
<ul>
<li><a href="https://github.com/antirez/redis/issues/83">https://github.com/antirez/redis/issues/83</a></li>
<li><a href="https://github.com/antirez/redis/issues/594">https://github.com/antirez/redis/issues/594</a></li>
</ul>
<p>Now, here are some solutions you can use with the current Redis versions.</p>
<p><strong>Solution 1: patching Redis</strong></p>
<p>Actually, adding a simple notification when Redis performs key expiration is not that hard. It can be implemented by adding 10 lines to the db.c file of Redis source code. Here is an example:</p>
<p><a href="https://gist.github.com/3258233">https://gist.github.com/3258233</a></p>
<p>This short patch posts a key to the #expired list if the key has expired and starts with a '@' character (arbitrary choice). It can easily be adapted to your needs.</p>
<p>It is then trivial to use the EXPIRE or SETEX commands to set an expiration time for your session objects, and write a small daemon which loops on BRPOP to dequeue from the "#expired" list, and propagate the notification in your application.</p>
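<p>Roughly, the flow could look like this (a sketch; the key name and timeout are made up, and the leading <code>@</code> is the marker the patch above looks for):</p>
<pre><code>SETEX @session:1234 1800 "<serialized session data>"
BRPOP #expired 0
</code></pre>
<p>The first command is what the application issues when it stores a session (here with a 30-minute timeout); the second is what the notification daemon blocks on.</p>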
<p>An important point is to understand how the expiration mechanism works in Redis. There are actually two different paths for expiration, both active at the same time:</p>
<ul>
<li><p>Lazy (passive) mechanism. The expiration may occur each time a key is accessed.</p></li>
<li><p>Active mechanism. An internal job regularly (randomly) samples a number of keys with expiration set, trying to find the ones to expire.</p></li>
</ul>
<p>Note that the above patch works fine with both paths.</p>
<p>The consequence is that Redis expiration time is not accurate. If all the keys have expiration set, but only one is about to expire, and it is not accessed, the active expiration job may take several minutes to find the key and expire it. If you need some accuracy in the notification, this is not the way to go.</p>
<p><strong>Solution 2: simulating expiration with zsets</strong></p>
<p>The idea here is to not rely on the Redis key expiration mechanism, but simulate it by using an additional index plus a polling daemon. It can work with an unmodified Redis 2.6 version.</p>
<p>Each time a session is added to Redis, you can run:</p>
<pre><code>MULTI
SET <session id> <session content>
ZADD to_be_expired <current timestamp + session timeout> <session id>
EXEC
</code></pre>
<p>The to_be_expired sorted set is just an efficient way to access the first keys that should be expired. A daemon can poll on to_be_expired using the following Lua server-side script:</p>
<pre><code>local res = redis.call('ZRANGEBYSCORE',KEYS[1], 0, ARGV[1], 'LIMIT', 0, 10 )
if #res > 0 then
redis.call( 'ZREMRANGEBYRANK', KEYS[1], 0, #res-1 )
return res
else
return false
end
</code></pre>
<p>The command to launch the script would be:</p>
<pre><code>EVAL <script> 1 to_be_expired <current timestamp>
</code></pre>
<p>The daemon will get at most 10 items. For each of them, it has to use the DEL command to remove the sessions, and notify the application. If one item was actually processed (i.e. the return of the Lua script is not empty), the daemon should loop immediately, otherwise a 1 second wait state can be introduced.</p>
<p>Thanks to the Lua script, it is possible to launch several polling daemons in parallel (the script guarantees that a given session will only be processed once, since the keys are removed from to_be_expired by the Lua script itself).</p>
<p><strong>Solution 3: use an external distributed timer</strong></p>
<p>Another solution is to rely on an external distributed timer. The <a href="http://kr.github.com/beanstalkd/">beanstalk lightweight queuing system</a> is a good possibility for this</p>
<p>Each time a session is added in the system, the application posts the session ID to a beanstalk queue with a delay corresponding to the session time out. A daemon is listening to the queue. When it can dequeue an item, it means a session has expired. It just has to clean the session in Redis, and notify the application.</p> |
11,493,463 | MYSQL - Order timestamp values ascending in order, from newest to oldest? | <p>I have come across a problem when trying to order certain results by their timestamp value.</p>
<p>I would like these results displayed from the newest, to the oldest based on the timestamp values.</p>
<p>So to explain this, imagine that there were 3 results:</p>
<pre><code>2012-07-11 17:34:57
2012-07-11 17:33:28
2012-07-11 17:33:07
</code></pre>
<p>This result set would be what I would require, but given the following query</p>
<pre><code>SELECT timestamp
FROM randomTable
ORDER BY timestamp ASC
</code></pre>
<p>I get:</p>
<pre><code>2012-07-11 17:34:57
2012-07-11 17:33:07
2012-07-11 17:33:28
</code></pre>
<p>This is as it is sorted by numerical value and <code>07</code> comes before <code>28</code>.</p>
<p>If i sort in descending order I get</p>
<pre><code>2012-07-11 17:33:07
2012-07-11 17:33:28
2012-07-11 17:34:57
</code></pre>
<p>Which is what I am looking for... But it is in reverse.</p>
<p>So my question is fairly simple, how could I sort these values in ascending order as I have described?</p>
<p>EDIT:</p>
<p><img src="https://i.stack.imgur.com/rKFwC.png" alt="The problem"></p>
<p>EDIT2:</p>
<pre><code>CREATE TABLE `user_quotations` (
`id` int(100) NOT NULL AUTO_INCREMENT,
`quoteNumber` int(100) NOT NULL,
`lastModified` datetime NOT NULL,
`userId` int(100) NOT NULL,
`manufacturer` varchar(250) COLLATE latin1_general_ci NOT NULL,
`modelNumber` varchar(250) COLLATE latin1_general_ci NOT NULL,
`productDesc` varchar(1000) COLLATE latin1_general_ci NOT NULL,
`timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `quoteNumber` (`quoteNumber`,`lastModified`,`userId`,`manufacturer`,`modelNumber`,`timestamp`),
KEY `productDesc` (`productDesc`)
) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
</code></pre> | 11,493,788 | 4 | 14 | null | 2012-07-15 16:09:54.927 UTC | null | 2014-09-29 05:38:27.597 UTC | 2013-03-02 00:04:09.467 UTC | null | 398,242 | null | 2,110,294 | null | 1 | 16 | mysql|timestamp|sql-order-by | 109,799 | <p>Your query : </p>
<pre><code>SELECT timestamp
FROM randomTable
ORDER BY timestamp ASC;
</code></pre>
<p>is perfect. But I have doubts about the results you have presented in your posting.
You posted: </p>
<pre><code>2012-07-11 17:34:57
2012-07-11 17:33:07
2012-07-11 17:33:28
</code></pre>
<p>But the results in your SQL box show: </p>
<pre><code>2012-07-11 17:34:57
2012-07-15 17:33:07
2012-07-15 17:33:28
</code></pre>
<p>Which are perfectly right. </p>
<p>Is that a typo in your posting?<br>
If not, then try the following: </p>
<pre><code>SELECT timestamp( `timestamp` ) as 'timestamp'
FROM randomTable
ORDER BY 1 ASC;
</code></pre> |
11,979,017 | Changing facet label to math formula in ggplot2 | <p>I wonder how to change the <code>facet</code> label to math formula in <code>ggplot2</code>. </p>
<pre><code>d <- ggplot(diamonds, aes(carat, price, fill = ..density..)) +
xlim(0, 2) + stat_binhex(na.rm = TRUE) + opts(aspect.ratio = 1)
d + facet_wrap(~ color, ncol = 4)
</code></pre>
<p><img src="https://i.stack.imgur.com/XxIpB.png" alt="enter image description here"></p>
<p>For example, I want to change facet label from <code>D</code> to <code>Y[1]</code>, where 1 is subscript. Thanks in advance for your help.</p>
<p>I found this <a href="https://stackoverflow.com/a/6539953/707145">answer</a> but it does not work for me. I'm using <code>R 2.15.1</code> and <code>ggplot2 0.9.1</code>.</p> | 11,979,544 | 5 | 4 | null | 2012-08-16 00:08:52.093 UTC | 13 | 2021-04-22 13:12:09.65 UTC | 2017-05-23 10:30:55.803 UTC | null | -1 | null | 707,145 | null | 1 | 17 | r|ggplot2 | 7,719 | <p>Perhaps somebody has changed the name of the edit-Grob function at some point. <a href="https://github.com/hadley/ggplot2/commit/1bc4d1e179536e3404418a9589d982c242a40793" rel="nofollow noreferrer">(Edit: It was removed by @hadley about 8 months ago.)</a> There is no <code>geditGrob</code> but just <code>editGrob</code> from pkg:grid seems to work:</p>
<pre><code> d <- ggplot(diamonds, aes(carat, price, fill = ..density..)) +
xlim(0, 2) + stat_binhex(na.rm = TRUE) + opts(aspect.ratio = 1)
#Note: changes in ggplot2 functions cause this to fail from the very beginning now.
# Frank Harrell's answer this year suggests `facet_warp` now accepts `labeller`
d <- d + facet_wrap(~ color, ncol = 4)
grob <- ggplotGrob(d)
strip_elem <- grid.ls(getGrob(grob, "strip.text.x", grep=TRUE, global=TRUE))$name
#strip.text.x.text.1535
#strip.text.x.text.1541
#strip.text.x.text.1547
#strip.text.x.text.1553
#strip.text.x.text.1559
#strip.text.x.text.1565
#strip.text.x.text.1571
grob <- editGrob(grob, strip_elem[1], label=expression(Y[1]))
grid.draw(grob)
</code></pre> |
11,725,594 | html navigator "User denied Geolocation" | <p>I have an issue with window.navigator, I'm getting error code 1, "User denied Geolocation" whenever I run the following code as a local html file:</p>
<pre><code>navigator.geolocation.getCurrentPosition(function(position) {
console.log(position);
}, function(positionError) {
console.log(positionError);
});
</code></pre>
<p>The output is coming from the error function, positionError contains:</p>
<pre><code>code: 1
message: "User denied Geolocation"
</code></pre>
<p>This does not happen if the containing html is served from some server.</p>
<p>Is this expected? Is there some way to use navigator from a local html? I am trying to write some mobile app, but am also trying to avoid network whenever possible.</p>
<p>thanks.</p> | 11,726,296 | 3 | 0 | null | 2012-07-30 16:27:15.977 UTC | 5 | 2022-03-20 02:17:59.47 UTC | 2012-07-30 19:18:41.697 UTC | null | 1,322,767 | null | 389,023 | null | 1 | 20 | javascript|html|geolocation|navigator | 62,474 | <p>If you are using chrome, please have a look at the answer below:</p>
<p><a href="https://stackoverflow.com/questions/5423938/html-5-geo-location-prompt-in-chrome">HTML 5 Geo Location Prompt in Chrome</a></p>
<p>It appears this is a security restriction for the <code>file</code> protocol. Looks like you are going to need to host it locally from a server.</p> |
11,692,560 | Elasticsearch, Tire, and Nested queries / associations with ActiveRecord | <p>I'm using ElasticSearch with Tire to index and search some ActiveRecord models, and I've been searching for the "right" way to index and search associations. I haven't found what seems like a best practice for this, so I wanted to ask if anyone has an approach that they think works really well.</p>
<p>As an example setup (this is made up but illustrates the problem), let's say we have a book, with chapters. Each book has a title and author, and a bunch of chapters. Each chapter has text. We want to index the book's fields and the chapters' text so you can search for a book by author, or for any book with certain words in it.</p>
<pre><code>class Book < ActiveRecord::Base
include Tire::Model::Search
include Tire::Model::Callbacks
has_many :chapters
mapping do
indexes :title, :analyzer => 'snowball', :boost => 100
indexes :author, :analyzer => 'snowball'
indexes :chapters, type: 'object', properties: {
chapter_text: { type: 'string', analyzer: 'snowball' }
}
end
end
class Chapter < ActiveRecord::Base
belongs_to :book
end
</code></pre>
<p>So then I do the search with:</p>
<pre><code>s = Book.search do
query { string query_string }
end
</code></pre>
<p>That doesn't work, even though it seems like that indexing should do it. If instead I index:</p>
<pre><code>indexes :chapters, :as => 'chapters.map{|c| c.chapter_text}.join('|'), :analyzer => 'snowball'
</code></pre>
<p>That makes the text searchable, but obviously it's not a nice hack and it loses the actual associated object. I've tried variations of the searching, like:</p>
<pre><code>s = Book.search do
query do
boolean do
should { string query_string }
should { string "chapters.chapter_text:#{query_string}" }
end
end
end
</code></pre>
<p>With no luck there, either. If anyone has a good, clear example of indexing and searching associated ActiveRecord objects using Tire, it seems like that would be a really good addition to the knowledge base here.</p>
<p>Thanks for any ideas and contributions.</p> | 11,711,477 | 2 | 2 | null | 2012-07-27 17:17:47.973 UTC | 18 | 2013-10-10 02:48:59.743 UTC | 2013-07-06 11:11:38.773 UTC | null | 95,696 | null | 148,725 | null | 1 | 28 | elasticsearch|tire | 10,108 | <p>The support for ActiveRecord associations in Tire is working, but requires couple of tweaks inside your application. There's no question the library should do better job here, and in the future it certainly will.</p>
<p>That said, here is a full-fledged example of Tire configuration to work with Rails' associations in elasticsearch: <a href="https://gist.github.com/3200212" rel="nofollow noreferrer">active_record_associations.rb</a></p>
<p>Let me highlight couple of things here.</p>
<h3>Touching the parent</h3>
<p>First, you have to ensure you notify the parent model of the association about changes in the association.</p>
<p>Given we have a <code>Chapter</code> model, which "belongs to" a <code>Book</code>, we need to do:</p>
<pre><code>class Chapter < ActiveRecord::Base
belongs_to :book, touch: true
end
</code></pre>
<p>In this way, when we do something like:</p>
<pre><code>book.chapters.create text: "Lorem ipsum...."
</code></pre>
<p>The <code>book</code> instance is notified about the added chapter.</p>
<h3>Responding to touches</h3>
<p>With this part sorted, we need to notify <em>Tire</em> about the change, and update the elasticsearch index accordingly:</p>
<pre><code>class Book < ActiveRecord::Base
has_many :chapters
after_touch() { tire.update_index }
end
</code></pre>
<p>(There's no question <em>Tire</em> should intercept <code>after_touch</code> notifications by itself, and not force you to do this. It is, on the other hand, a testament to how easy it is to work your way around the library's limitations in a manner which does not hurt your eyes.)</p>
<h3>Proper JSON serialization in Rails < 3.1</h3>
<p>Although the README mentions you have to disable automatic "adding root key in JSON" in Rails < 3.1, many people forget it, so you have to include it in the class definition as well:</p>
<pre><code>self.include_root_in_json = false
</code></pre>
<h3>Proper mapping for elasticsearch</h3>
<p>Now comes the meat of our work -- defining proper mapping for our documents (models):</p>
<pre><code>mapping do
indexes :title, type: 'string', boost: 10, analyzer: 'snowball'
indexes :created_at, type: 'date'
indexes :chapters do
indexes :text, analyzer: 'snowball'
end
end
</code></pre>
<p>Notice we index <code>title</code> with boosting, <code>created_at</code> as "date", and chapter text from the associated model. All the data are effectively "de-normalized" into a single document in elasticsearch (if such a term makes sense here).</p>
<h3>Proper document JSON serialization</h3>
<p>As the last step, we have to properly serialize the document in the elasticsearch index. Notice how we can leverage the convenient <code>to_json</code> method from <em>ActiveRecord</em>:</p>
<pre><code>def to_indexed_json
to_json( include: { chapters: { only: [:text] } } )
end
</code></pre>
<p>With all this setup in place, we can search in properties in both the <code>Book</code> and the <code>Chapter</code> parts of our document.</p>
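<p>For instance, a query string along these lines should hit both (a sketch; the search terms are made up):</p>
<pre><code>Book.search "title:caves OR chapters.text:dragon"
</code></pre>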
<p>Please run the <a href="https://gist.github.com/3200212" rel="nofollow noreferrer">active_record_associations.rb</a> Ruby file linked at the beginning to see the full picture.</p>
<p>For further information, please refer to these resources:</p>
<ul>
<li><a href="https://github.com/karmi/railscasts-episodes/commit/ee1f6f3" rel="nofollow noreferrer">https://github.com/karmi/railscasts-episodes/commit/ee1f6f3</a></li>
<li><a href="https://github.com/karmi/railscasts-episodes/commit/03c45c3" rel="nofollow noreferrer">https://github.com/karmi/railscasts-episodes/commit/03c45c3</a></li>
<li><a href="https://github.com/karmi/tire/blob/master/test/models/active_record_models.rb#L10-20" rel="nofollow noreferrer">https://github.com/karmi/tire/blob/master/test/models/active_record_models.rb#L10-20</a></li>
</ul>
<p>See this StackOverflow answer: <a href="https://stackoverflow.com/questions/11672072/elasticsearch-tire-using-mapping-and-to-indexed-json/11700251#11700251">ElasticSearch & Tire: Using Mapping and to_indexed_json</a> for more information about <code>mapping</code> / <code>to_indexed_json</code> interplay.</p>
<p>See this StackOverflow answer: <a href="https://stackoverflow.com/questions/13600086/index-the-results-of-a-method-in-elasticsearch-tire-activerecord/13847929#13847929">Index the results of a method in ElasticSearch (Tire + ActiveRecord)</a> to see how to fight n+1 queries when indexing models with associations.</p> |
11,528,078 | Determining duplicate values in an array | <p>Suppose I have an array</p>
<pre><code>a = np.array([1, 2, 1, 3, 3, 3, 0])
</code></pre>
<p>How can I (efficiently, Pythonically) find which elements of <code>a</code> are duplicates (i.e., non-unique values)? In this case the result would be <code>array([1, 3, 3])</code> or possibly <code>array([1, 3])</code> if efficient.</p>
<p>I've come up with a few methods that appear to work:</p>
<h3>Masking</h3>
<pre><code>m = np.zeros_like(a, dtype=bool)
m[np.unique(a, return_index=True)[1]] = True
a[~m]
</code></pre>
<h3>Set operations</h3>
<pre><code>a[~np.in1d(np.arange(len(a)), np.unique(a, return_index=True)[1], assume_unique=True)]
</code></pre>
<p>This one is cute but probably illegal (as <code>a</code> isn't actually unique):</p>
<pre><code>np.setxor1d(a, np.unique(a), assume_unique=True)
</code></pre>
<h3>Histograms</h3>
<pre><code>u, i = np.unique(a, return_inverse=True)
u[np.bincount(i) > 1]
</code></pre>
<h3>Sorting</h3>
<pre><code>s = np.sort(a, axis=None)
s[:-1][s[1:] == s[:-1]]
</code></pre>
<h3>Pandas</h3>
<pre><code>s = pd.Series(a)
s[s.duplicated()]
</code></pre>
<p>Is there anything I've missed? I'm not necessarily looking for a numpy-only solution, but it has to work with numpy data types and be efficient on medium-sized data sets (up to 10 million in size).</p>
<hr/>
<h2>Conclusions</h2>
<p>Testing with a 10 million size data set (on a 2.8GHz Xeon):</p>
<pre><code>a = np.random.randint(10**7, size=10**7)
</code></pre>
<p>The fastest is sorting, at 1.1s. The dubious <code>xor1d</code> is second at 2.6s, followed by masking and Pandas <code>Series.duplicated</code> at 3.1s, <code>bincount</code> at 5.6s, and <code>in1d</code> and senderle's <code>setdiff1d</code> both at 7.3s. Steven's <code>Counter</code> is only a little slower, at 10.5s; trailing behind are Burhan's <code>Counter.most_common</code> at 110s and DSM's <code>Counter</code> subtraction at 360s.</p>
<p>I'm going to use sorting for performance, but I'm accepting Steven's answer because the performance is acceptable and it <em>feels</em> clearer and more Pythonic.</p>
<p>Edit: discovered the Pandas solution. If Pandas is available it's clear and performs well.</p> | 11,528,581 | 10 | 6 | null | 2012-07-17 17:52:00.047 UTC | 28 | 2022-03-24 23:46:32.603 UTC | 2017-11-13 17:43:35.097 UTC | null | 567,292 | null | 567,292 | null | 1 | 80 | python|numpy|duplicates|unique | 95,192 | <p>I think this is most clear done outside of <code>numpy</code>. You'll have to time it against your <code>numpy</code> solutions if you are concerned with speed.</p>
<pre><code>>>> import numpy as np
>>> from collections import Counter
>>> a = np.array([1, 2, 1, 3, 3, 3, 0])
>>> [item for item, count in Counter(a).items() if count > 1]
[1, 3]
</code></pre>
<p><em>note:</em> This is similar to Burhan Khalid's answer, but the use of <code>items</code> without subscripting in the condition should be faster.</p> |
11,532,636 | How to prevent http file caching in Apache httpd (MAMP) | <p>I am developing a single page Javascript application in MAMP. My JavaScript and HTML template files are getting cached between requests.</p>
<p>Is there a simple way to indicate in MAMP that I want to prevent http file caching? Possibly with a <code>.htaccess</code> file? Where do I place the <code>.htaccess</code> or modify the virtual host for MAMP on Mac?</p> | 11,724,596 | 5 | 0 | null | 2012-07-18 00:16:02.847 UTC | 70 | 2019-09-03 06:49:48.447 UTC | 2016-11-24 16:20:08.84 UTC | null | 323,041 | null | 488,004 | null | 1 | 149 | apache|.htaccess|http-headers|mamp | 207,120 | <p>Tried this? Should work in both <code>.htaccess</code>, <code>httpd.conf</code> and in a <code>VirtualHost</code> (usually placed in <code>httpd-vhosts.conf</code> if you have included it from your httpd.conf)</p>
<pre><code><filesMatch "\.(html|htm|js|css)$">
FileETag None
<ifModule mod_headers.c>
Header unset ETag
Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
</ifModule>
</filesMatch>
</code></pre>
<blockquote>
<p>100% Prevent Files from being cached</p>
<p>This is similar to how Google ads employ the header <code>Cache-Control: private, x-gzip-ok=""</code> to prevent caching of ads by proxies and clients.</p>
</blockquote>
<p>From <a href="http://www.askapache.com/htaccess/using-http-headers-with-htaccess.html" rel="noreferrer">http://www.askapache.com/htaccess/using-http-headers-with-htaccess.html</a></p>
<p>And optionally add the extension for the template files you are retrieving if you are using an extension other than <code>.html</code> for those.</p> |
20,091,233 | Declare a void function in C | <p>I am learning C and I am studying functions. So, I read that when I implement my own function I have to declare it before main(). If I omit the declaration, the compiler will produce an error message.</p>
<p>As I was studying this example (finds if the number is a prime number),</p>
<pre><code>#include <stdio.h>
void prime(); // Function prototype(declaration)
int main()
{
int num, i, flag;
num = input(); // No argument is passed to input()
for(i=2,flag=i; i<=num/2; ++i,flag=i)
{
flag = i;
if(num%i==0)
{
printf("%d is not prime\n", num);
++flag;
break;
}
}
if(flag==i)
printf("%d is prime\n", num);
return 0;
}
int input() /* Integer value is returned from input() to calling function */
{
int n;
printf("\nEnter positive enter to check: ");
scanf("%d", &n);
return n;
}
</code></pre>
<p>I noticed that a function prime() is declared, but in main a function input() is called, and the function input() is implemented at the bottom. OK, I thought it was a mistake and I changed the name from prime to input.</p>
<p><strong>However</strong>, if I delete the declaration and don't put one there, the program compiles without errors and runs smoothly. (I compile and run it on Ubuntu.)</p>
<p>Is it necessary to declare a void function with not arguments?</p> | 20,091,295 | 3 | 9 | null | 2013-11-20 08:52:40.873 UTC | 5 | 2020-03-11 13:49:33.91 UTC | 2020-03-11 13:33:36.453 UTC | null | 63,550 | null | 2,594,961 | null | 1 | 2 | c|function|declare | 77,646 | <p>If you don't have a forward declaration of your function before the place of usage, the compiler will create implicit declaration for you - with the signature <code>int input()</code>. It will take the name of the function you called, it will assume that the function is returning <code>int</code>, and it can accept <em>any</em> arguments (as <a href="https://stackoverflow.com/users/752976/bartek-banachewicz">Bartek</a> noted in the comment).</p>
<p>For this function, the implicit declaration matches the real declaration, so you don't have problems. However, you should always be careful about this, and <strong>you should always prefer forward declarations over implicit ones</strong> (no matter whether they are the same or not). So, instead of just having a forward declaration of the <code>void prime()</code> function (assuming that you will use it somewhere), you should also have a forward declaration of <code>int input()</code>.</p>
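<p>In this example that would just mean adding one line before <code>main()</code> (a sketch):</p>
<pre><code>int input(void);   /* forward declaration of input() */
</code></pre>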
<p>To see how you can pass any number of arguments, consider this:</p>
<pre><code>#include <stdio.h>
// Takes any number of the arguments
int foo();
// Doesn't takes any arguments
int bar(void)
{
printf("Hello from bar()!\n");
return 0;
}
int main()
{
// Both works
// However, this will print junk as you're not pushing
// Any arguments on the stack - but the compiler will assume you are
foo();
// This will print 1, 2, 3
foo(1, 2, 3);
// Works
bar();
// Doesn't work
// bar(1, 2, 3);
return 0;
}
// Definition
int foo(int i, int j, int k)
{
printf("%d %d %d\n", i, j, k);
return 0;
}
</code></pre>
<p>So, inside the <strong>definition</strong> of the function you describe the function's arguments. The <strong>declaration</strong> with empty parentheses, however, tells the compiler not to do any checks on the parameters.</p> |
3,977,167 | NameError: global name is not defined | <p>I'm using Python 2.6.1 on Mac OS X.</p>
<p>I have two simple Python files (below), but when I run</p>
<pre><code>python update_url.py
</code></pre>
<p>I get on the terminal:</p>
<pre><code>Traceback (most recent call last):
File "update_urls.py", line 7, in <module>
main()
File "update_urls.py", line 4, in main
db = SqliteDBzz()
NameError: global name 'SqliteDBzz' is not defined
</code></pre>
<p>I tried renaming the files and classes differently, which is why there's x and z on the ends. ;)</p>
<h3>File sqlitedbx.py</h3>
<pre><code>class SqliteDBzz:
connection = ''
curser = ''
def connect(self):
print "foo"
def find_or_create(self, table, column, value):
print "baar"
</code></pre>
<h3>File update_url.py</h3>
<pre><code>import sqlitedbx
def main():
db = SqliteDBzz()
db.connect
if __name__ == "__main__":
main()
</code></pre> | 3,977,194 | 4 | 0 | null | 2010-10-20 11:17:36.413 UTC | 2 | 2013-03-26 13:36:30.87 UTC | 2012-06-10 21:21:24.533 UTC | Roger Pate | 63,550 | null | 133,498 | null | 1 | 26 | python|class|namespaces | 131,606 | <p>You need to do:</p>
<pre><code>import sqlitedbx
def main():
db = sqlitedbx.SqliteDBzz()
db.connect()
if __name__ == "__main__":
main()
</code></pre> |
3,551,055 | How to get name of the computer in VBA? | <p>Is there a way to get the name of the computer in VBA?</p> | 3,551,071 | 4 | 0 | null | 2010-08-23 19:38:41.767 UTC | 11 | 2021-02-23 08:55:32.02 UTC | 2021-02-23 08:55:32.02 UTC | null | 5,779,732 | null | 117,700 | null | 1 | 47 | vba|environment-variables|computer-name | 133,886 | <pre><code>Dim sHostName As String
' Get Host Name / Get Computer Name
sHostName = Environ$("computername")
</code></pre> |
3,848,390 | Is there any point using MySQL "LIMIT 1" when querying on indexed/unique field? | <p>For example, I'm querying on a field I know will be unique and is indexed such as a primary key. Hence I know this query will only return 1 row (even without the LIMIT 1)</p>
<p><code>SELECT * FROM tablename WHERE tablename.id=123 LIMIT 1</code></p>
<p>or only update 1 row</p>
<p><code>UPDATE tablename SET somefield='somevalue' WHERE tablename.id=123 LIMIT 1</code></p>
<p>Would adding the <code>LIMIT 1</code> improve query execution time if the field is indexed?</p> | 3,848,445 | 5 | 0 | null | 2010-10-03 03:04:46.717 UTC | 9 | 2020-02-12 09:15:50.737 UTC | null | null | null | null | 188,365 | null | 1 | 44 | sql|mysql|performance|limit|indexing | 6,269 | <h2>Is there any point using MySQL βLIMIT 1β when querying on primary key/unique field?</h2>
<p>It is not good practice to use <code>LIMIT 1</code> when querying with filter criteria that is against either a primary key or unique constraint. A primary key, or unique constraint, means there is only one row/record in the table with that value, only one row/record will ever be returned. It's contradictory to have <code>LIMIT 1</code> on a primary key/unique field--someone maintaining the code later could mistake the importance & second guess your code.</p>
<p>But the ultimate indicator is the explain plan:</p>
<pre><code>explain SELECT t.name FROM USERS t WHERE t.userid = 4
</code></pre>
<p>...returns:</p>
<pre><code>id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
-----------------------------------------------------------------------------------------------------
1 | SIMPLE | users | const | PRIMARY | PRIMARY | 4 | const | 1 |
</code></pre>
<p>...and:</p>
<pre><code>explain SELECT t.name FROM USERS t WHERE t.userid = 4 LIMIT 1
</code></pre>
<p>...returns:</p>
<pre><code>id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
-----------------------------------------------------------------------------------------------------
1 | SIMPLE | users | const | PRIMARY | PRIMARY | 4 | const | 1 |
</code></pre>
<h2>Conclusion</h2>
<p>No difference, no need. It appears to be optimized out in this case (only searching against the primary key).</p>
<h2>What about an indexed field?</h2>
<p>An indexed field doesn't guarantee uniqueness of the value being filtered, there could be more than one occurrence. So <code>LIMIT 1</code> would make sense, assuming you want to return one row.</p> |
3,491,811 | Node.js and CPU intensive requests | <p>I've started tinkering with Node.js HTTP server and really like to write server side Javascript but something is keeping me from starting to use Node.js for my web application.</p>
<p>I understand the whole async I/O concept but I'm somewhat concerned about the edge cases where procedural code is very CPU intensive such as image manipulation or sorting large data sets.</p>
<p>As I understand it, the server will be very fast for simple web page requests such as viewing a listing of users or viewing a blog post. However, if I want to write very CPU intensive code (in the admin back end for example) that generates graphics or resizes thousands of images, the request will be very slow (a few seconds). Since this code is not async, every requests coming to the server during those few seconds will be blocked until my slow request is done. </p>
<p>One suggestion was to use Web Workers for CPU intensive tasks. However, I'm afraid web workers will make it hard to write clean code since it works by including a separate JS file. What if the CPU intensive code is located in an object's method? It kind of sucks to write a JS file for every method that is CPU intensive.</p>
<p>Another suggestion was to spawn a child process, but that makes the code even less maintainable.</p>
<p>Any suggestions to overcome this (perceived) obstacle? How do you write clean object oriented code with Node.js while making sure CPU heavy tasks are executed async? </p> | 3,536,183 | 5 | 1 | null | 2010-08-16 09:04:31.567 UTC | 147 | 2018-06-20 10:27:21.26 UTC | null | null | null | null | 96,855 | null | 1 | 233 | javascript|node.js|serverside-javascript|web-worker | 57,720 | <p>What you need is a task queue! Moving your long running tasks out of the web-server is a GOOD thing. Keeping each task in "separate" js file promotes modularity and code reuse. It forces you to think about how to structure your program in a way that will make it easier to debug and maintain in the long run. Another benefit of a task queue is the workers can be written in a different language. Just pop a task, do the work, and write the response back.</p>
<p>something like this <a href="https://github.com/resque/resque" rel="noreferrer">https://github.com/resque/resque</a></p>
<p>Here is an article from github about why they built it <a href="http://github.com/blog/542-introducing-resque" rel="noreferrer">http://github.com/blog/542-introducing-resque</a></p> |
3,637,936 | Java: Integer equals vs. == | <p>As of Java 1.5, you can pretty much interchange <code>Integer</code> with <code>int</code> in many situations.</p>
<p>However, I found a potential defect in my code that surprised me a bit.</p>
<p>The following code:</p>
<pre><code>Integer cdiCt = ...;
Integer cdsCt = ...;
...
if (cdiCt != null && cdsCt != null && cdiCt != cdsCt)
mismatch = true;
</code></pre>
<p>appeared to be incorrectly setting mismatch when the values were equal, although I can't determine under what circumstances. I set a breakpoint in Eclipse and saw that the <code>Integer</code> values were both 137, and I inspected the boolean expression and it said it was false, but when I stepped over it, it was setting mismatch to true.</p>
<p>Changing the conditional to:</p>
<pre><code>if (cdiCt != null && cdsCt != null && !cdiCt.equals(cdsCt))
</code></pre>
<p>fixed the problem.</p>
<p>Can anyone shed some light on why this happened? So far, I have only seen the behavior on my localhost on my own PC. In this particular case, the code successfully made it past about 20 comparisons, but failed on 2. The problem was consistently reproducible.</p>
<p>If it is a prevalent problem, it should be causing errors on our other environments (dev and test), but so far, no one has reported the problem after hundreds of tests executing this code snippet.</p>
<p>Is it still not legitimate to use <code>==</code> to compare two <code>Integer</code> values?</p>
<p>In addition to all the fine answers below, the following stackoverflow link has quite a bit of additional information. It actually would have answered my original question, but because I didn't mention autoboxing in my question, it didn't show up in the selected suggestions:</p>
<p><a href="https://stackoverflow.com/questions/2602636/why-cant-the-compiler-jvm-just-make-autoboxing-just-work">Why can't the compiler/JVM just make autoboxing βjust workβ?</a></p> | 3,637,974 | 7 | 0 | null | 2010-09-03 17:03:40.237 UTC | 52 | 2020-09-24 08:13:33.517 UTC | 2017-05-23 12:26:06.337 UTC | null | -1 | null | 285,878 | null | 1 | 190 | java|integer|wrapper|primitive|equals-operator | 205,358 | <p>The JVM is caching Integer values. Hence the comparison with <code>==</code> only works for numbers between -128 and 127.</p>
<p>Refer: <a href="http://www.owasp.org/index.php/Java_gotchas#Immutable_Objects_.2F_Wrapper_Class_Caching" rel="noreferrer">#Immutable_Objects_.2F_Wrapper_Class_Caching</a></p> |
3,291,152 | Ruby on Rails 3: how to make an 'OR' condition | <p>I need an SQL statement that checks if one condition is satisfied:</p>
<pre><code>SELECT * FROM my_table WHERE my_table.x=1 OR my_table.y=1
</code></pre>
<p>I want to do this the 'Rails 3' way. I was looking for something like:</p>
<pre><code>Account.where(:id => 1).or.where(:id => 2)
</code></pre>
<p>I know that I can always fall back to SQL or a conditions string. However, in my experience this often leads to chaos when combining scopes. What is the best way to do this?</p>
<p>Another related question is how I can describe a relationship that depends on an OR condition. The only way I found:</p>
<pre><code>has_many :my_thing, :class_name => "MyTable", :finder_sql => 'SELECT my_tables.* ' + 'FROM my_tables ' +
'WHERE my_tables.payer_id = #{id} OR my_tables.payee_id = #{id}'
</code></pre>
<p>However, these again breaks when used in combinations. IS there a better way to specify this?</p> | 3,291,767 | 9 | 0 | null | 2010-07-20 14:44:44.73 UTC | 8 | 2017-01-14 11:15:54.737 UTC | 2012-11-14 11:22:58.473 UTC | null | 429,850 | null | 372,981 | null | 1 | 50 | ruby-on-rails|ruby|ruby-on-rails-3 | 59,140 | <p>Sadly, the .or isn't implemented yet (but when it is, it'll be AWESOME).</p>
<p>So you'll have to do something like:</p>
<pre><code>class Project < ActiveRecord::Base
scope :sufficient_data, :conditions=>['ratio_story_completion != 0 OR ratio_differential != 0']
  scope :profitable, :conditions=>['profit > 0']
end
</code></pre>
<p>That way you can still be awesome and do:</p>
<pre><code>Project.sufficient_data.profitable
</code></pre> |
8,200,838 | Is there a way to check the visibility of the status bar? | <p>I have a service that should periodically check the visibility of the status bar, when some top activity is (or is not) in fullscreen mode.
Is it possible?</p> | 9,195,733 | 4 | 0 | null | 2011-11-20 10:13:28.527 UTC | 9 | 2019-01-15 15:11:18.77 UTC | null | null | null | null | 926,568 | null | 1 | 10 | android|android-layout|statusbar | 9,867 | <p>Finally I have discovered how to check if statusbar is visible or not. Its some kind of hack, but it works for me. I created that method in my Service:</p>
<pre><code>private void createHelperWnd() {
WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE);
final WindowManager.LayoutParams p = new WindowManager.LayoutParams();
p.type = WindowManager.LayoutParams.TYPE_SYSTEM_OVERLAY;
p.gravity = Gravity.RIGHT | Gravity.TOP;
p.flags = WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE;
p.width = 1;
p.height = LayoutParams.MATCH_PARENT;
p.format = PixelFormat.TRANSPARENT;
helperWnd = new View(this); //View helperWnd;
wm.addView(helperWnd, p);
final ViewTreeObserver vto = helperWnd.getViewTreeObserver();
vto.addOnGlobalLayoutListener(new OnGlobalLayoutListener() {
@Override
public void onGlobalLayout() {
if (heightS == helperWnd.getHeight()) {
isFullScreen = true;
} else {
isFullScreen = false;
}
}
});
}
</code></pre>
<p>where widthS and heightS are our global screen width and height;
here I just compare the invisible helper window's height to the screen height to decide whether the status bar is visible. And do not forget to remove helperWnd in onDestroy of your Service.</p> |
7,985,599 | Notification of new S3 objects | <p>I have a scenario where we have many clients uploading to s3.</p>
<ul>
<li>What is the best approach to knowing that there is a new file?</li>
<li>Is it realistic/good idea, for me to poll the bucket ever few seconds?</li>
</ul> | 8,007,849 | 4 | 0 | null | 2011-11-02 18:37:41.377 UTC | 6 | 2022-08-31 00:02:26.743 UTC | 2018-01-18 11:49:49.43 UTC | null | 5,088,142 | null | 402,662 | null | 1 | 37 | amazon-s3|amazon | 21,489 | <p><strong>UPDATE:</strong></p>
<p>Since November 2014, S3 supports the following event notifications:</p>
<ul>
<li><code>s3:ObjectCreated:Put</code> β An object was created by an HTTP PUT operation.</li>
<li><code>s3:ObjectCreated:Post</code> β An object was created by HTTP POST operation.</li>
<li><code>s3:ObjectCreated:Copy</code> β An object was created an S3 copy operation.</li>
<li><code>s3:ObjectCreated:CompleteMultipartUpload</code> β An object was created by the completion of a S3 multi-part upload.</li>
<li><code>s3:ObjectCreated:*</code> β An object was created by one of the event types listed above or by a similar object creation event added in the future.</li>
<li><code>s3:ReducedRedundancyObjectLost</code> β An S3 object stored with Reduced Redundancy has been lost.</li>
</ul>
<p>These notifications can be issued to <a href="https://aws.amazon.com/sns/" rel="noreferrer">Amazon SNS</a>, <a href="https://aws.amazon.com/sqs/" rel="noreferrer">SQS</a> or <a href="https://aws.amazon.com/lambda/" rel="noreferrer">Lambda</a>. Check out the blog post that's linked in <a href="https://stackoverflow.com/a/26998552/1027148">Alan's answer</a> for more information on these new notifications.</p>
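<p>As a rough sketch of what enabling this can look like with the AWS CLI (the bucket name and queue ARN below are placeholders):</p>
<pre><code>aws s3api put-bucket-notification-configuration \
    --bucket my-upload-bucket \
    --notification-configuration '{
      "QueueConfigurations": [
        { "QueueArn": "arn:aws:sqs:us-east-1:123456789012:new-objects",
          "Events": ["s3:ObjectCreated:*"] }
      ]
    }'
</code></pre>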
<p><strong>Original Answer:</strong></p>
<p>Although Amazon S3 has a bucket notifications system in place it does not support notifications for anything but the <em>s3:ReducedRedundancyLostObject</em> event (see the <em>GET Bucket notification</em> section in their API).</p>
<p>Currently the only way to check for new objects is to poll the bucket at a preset time interval or build your own notification logic in the upload clients (possibly based on Amazon SNS).</p> |
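<p>If you do end up polling, a minimal sketch with boto3 (the bucket name and interval are placeholders, and pagination is ignored for brevity) could look like this:</p>
<pre><code>import time
import boto3

s3 = boto3.client("s3")
seen = set()

while True:
    # List the bucket and report any keys we have not seen before.
    resp = s3.list_objects_v2(Bucket="my-bucket")
    for obj in resp.get("Contents", []):
        if obj["Key"] not in seen:
            seen.add(obj["Key"])
            print("new object:", obj["Key"])
    time.sleep(10)
</code></pre>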
7,970,179 | Transposing a dataframe maintaining the first column as heading | <p>I have a big dataframe, but small example would be like this:</p>
<pre><code>mydf <- data.frame(A = c(letters[1:10]), M1 = c(11:20), M2 = c(31:40), M3 = c(41:50))
</code></pre>
<p>I want to transpose the dataframe and keep column 1 (A) as the column headings (letters[1:10]), i.e. as the variable names. The following are scratch trials of unsuccessful code. </p>
<pre><code>tmydf = data.frame(t(mydf))
names(tmydf) <- tmydf[1,]
</code></pre>
<p>Thanks;</p> | 7,970,267 | 5 | 1 | null | 2011-11-01 17:14:36.333 UTC | 15 | 2021-06-17 14:03:46.353 UTC | 2011-11-02 21:04:36.533 UTC | null | 429,846 | null | 927,589 | null | 1 | 35 | r|dataframe|transpose | 55,250 | <p>Here is one way</p>
<pre><code>tmydf = setNames(data.frame(t(mydf[,-1])), mydf[,1])
</code></pre> |
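<p>To illustrate what this produces with the example data from the question: the transposed data frame has one row per original measurement column (M1, M2, M3) and one column per original value of A.</p>
<pre><code>mydf <- data.frame(A = c(letters[1:10]), M1 = c(11:20), M2 = c(31:40), M3 = c(41:50))
tmydf <- setNames(data.frame(t(mydf[,-1])), mydf[,1])
tmydf$a
# [1] 11 31 41
</code></pre>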
7,749,639 | How to get the difference in years from two different dates? | <p>I want to get the difference in years from two different dates using MySQL database. </p>
<p>for example:</p>
<ul>
<li>2011-07-20 - 2011-07-18 => 0 year</li>
<li>2011-07-20 - 2010-07-20 => 1 year</li>
<li>2011-06-15 - 2008-04-11 => <strike>2</strike> 3 years</li>
<li>2011-06-11 - 2001-10-11 => 9 years</li>
</ul>
<p>How about the SQL syntax? Is there any built in function from MySQL to produce the result?</p> | 7,749,665 | 7 | 0 | null | 2011-10-13 05:00:20.75 UTC | 10 | 2018-10-30 14:08:57.12 UTC | 2011-12-05 17:10:20.403 UTC | null | 246,131 | null | 246,131 | null | 1 | 51 | mysql|sql | 89,168 | <p>Here's the expression that also caters for leap years:</p>
<pre><code>YEAR(date1) - YEAR(date2) - (DATE_FORMAT(date1, '%m%d') < DATE_FORMAT(date2, '%m%d'))
</code></pre>
<p>This works because the expression <code>(DATE_FORMAT(date1, '%m%d') < DATE_FORMAT(date2, '%m%d'))</code> is <code>true</code> if date1 is "earlier in the year" than date2 <em>and</em> because in mysql, <code>true = 1</code> and <code>false = 0</code>, so the adjustment is simply a matter of subtracting the "truth" of the comparison.</p>
<p>This gives the correct values for your test cases, except for test #3 - I think it should be "3" to be consistent with test #1:</p>
<pre><code>create table so7749639 (date1 date, date2 date);
insert into so7749639 values
('2011-07-20', '2011-07-18'),
('2011-07-20', '2010-07-20'),
('2011-06-15', '2008-04-11'),
('2011-06-11', '2001-10-11'),
('2007-07-20', '2004-07-20');
select date1, date2,
YEAR(date1) - YEAR(date2)
- (DATE_FORMAT(date1, '%m%d') < DATE_FORMAT(date2, '%m%d')) as diff_years
from so7749639;
</code></pre>
<p>Output:</p>
<pre><code>+------------+------------+------------+
| date1 | date2 | diff_years |
+------------+------------+------------+
| 2011-07-20 | 2011-07-18 | 0 |
| 2011-07-20 | 2010-07-20 | 1 |
| 2011-06-15 | 2008-04-11 | 3 |
| 2011-06-11 | 2001-10-11 | 9 |
| 2007-07-20 | 2004-07-20 | 3 |
+------------+------------+------------+
</code></pre>
<p>See <a href="http://sqlfiddle.com/#!2/0ae42/1">SQLFiddle</a></p> |
8,015,313 | How to Programmatically Scroll a ScrollView to Bottom | <p>I have a problem I can't solve: inside a ScrollView I only have a
LinearLayout. On a user action I'm programmatically adding 2 TextViews
to this LinearLayout, but by default the scroll stays at the top.
Since I control the user action, it should be easy to scroll to the
bottom with something like:</p>
<pre><code>ScrollView scroll = (ScrollView) this.findViewById(R.id.scroll);
scroll.scrollTo(0, scroll.getBottom());
</code></pre>
<p>But actually it doesn't, because immediately after adding these two new
elements getBottom() still returns the previous bottom. I tried to
refresh the state by invoking <code>refreshDrawableState()</code>, but it doesn't work.</p>
<p>Do you have any idea how could I get the actual bottom of a ScrollView
after adding some elements?</p> | 10,459,092 | 7 | 3 | null | 2011-11-04 20:23:28.52 UTC | 7 | 2022-02-08 13:28:26.833 UTC | 2013-11-14 13:06:31.483 UTC | null | 675,552 | null | 1,030,387 | null | 1 | 58 | java|android|android-layout|android-scrollview | 80,703 | <p>This doesn't actually answer your question.
But it's an alternative which pretty much does the same thing.</p>
<p>Instead of Scrolling to the bottom of the screen, change the focus to a view which is located at the bottom of the screen. </p>
<p>That is, Replace:</p>
<pre><code>scroll.scrollTo(0, scroll.getBottom());
</code></pre>
<p>with:</p>
<pre><code>Footer.requestFocus();
</code></pre>
<p>Make sure you specify that the view, say 'Footer' is focusable.</p>
<pre><code>android:focusable="true"
android:focusableInTouchMode="true"
</code></pre> |
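<p>Another commonly used approach (a sketch, not part of the answer above) is to defer the scroll until after the pending layout pass, so that the ScrollView already knows about the newly added views:</p>
<pre><code>final ScrollView scroll = (ScrollView) findViewById(R.id.scroll);
scroll.post(new Runnable() {
    @Override
    public void run() {
        // Runs after layout, so the new children are already measured
        scroll.fullScroll(View.FOCUS_DOWN);
    }
});
</code></pre>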
8,207,655 | Get time of specific timezone | <p>I am using a JavaScript <code>Date</code> class & trying to get the current date using <code>getDate()</code> method. But obviously it is loading system date & time. I am running the code from India but I want to get the date & time of UK using the same method. How can I do that ?</p> | 8,207,708 | 9 | 2 | null | 2011-11-21 05:28:18.897 UTC | 35 | 2022-04-13 08:29:56.267 UTC | 2018-08-29 22:37:00.04 UTC | null | 3,345,644 | null | 451,959 | null | 1 | 88 | javascript|date|time|timezone | 286,642 | <p>If you know the UTC offset then you can pass it and get the time using the following function:</p>
<pre><code>function calcTime(city, offset) {
// create Date object for current location
var d = new Date();
// convert to msec
// subtract local time zone offset
// get UTC time in msec
var utc = d.getTime() + (d.getTimezoneOffset() * 60000);
// create new Date object for different city
// using supplied offset
var nd = new Date(utc + (3600000*offset));
// return time as a string
return "The local time for city"+ city +" is "+ nd.toLocaleString();
}
alert(calcTime('Bombay', '+5.5'));
</code></pre>
<p>Taken from: <a href="https://www.techrepublic.com/article/convert-the-local-time-to-another-time-zone-with-this-javascript/" rel="noreferrer">Convert Local Time to Another</a></p> |
7,851,134 | Syntax highlighting/colorizing cat | <p>Is there a method to colorize the output of <code>cat</code>, the way <code>grep</code> does.</p>
<p>For <code>grep</code>, in most consoles it displays colored output highlighting the searched keywords; otherwise, you can force it by calling <code>grep --color</code>.
Is there a generic way to color the output of any program according to your personal choice?</p>
<p>From what I understand, the program itself is not responsible for the colors. It is the shell. </p>
<p>I am using the default shell in FreeBSD 5.2.1 which looks like it has never seen colors since epoch.</p> | 7,855,793 | 20 | 8 | null | 2011-10-21 14:47:59.013 UTC | 83 | 2022-09-18 21:02:22.573 UTC | 2014-08-10 16:59:00.7 UTC | null | 3,405,122 | null | 157,880 | null | 1 | 260 | unix|syntax-highlighting|color-scheme | 131,748 | <p><code>cat</code> with syntax highlighting is simply out of scope. <code>cat</code> is not meant for that.
If you just want to have the entire content of some file coloured in some way (with the same colour for the whole file), you can make use of terminal escape sequences to control the color.</p>
<p>Here's a sample script that will choose the colour based on the file type (you can use something like this instead of invoking <code>cat</code> directly):</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
fileType="$(file "$1" | grep -o 'text')"
if [ "$fileType" == 'text' ]; then
echo -en "\033[1m"
else
echo -en "\033[31m"
fi
cat "$1"
echo -en "\033[0m"
</code></pre>
<p>The above (on a terminal that supports those escape sequences) will print any text file as 'bold', and will print any binary file as red. You can use <code>strings</code> instead of <code>cat</code> for printing binary files and you can enhance the logic to make it suit your needs.</p> |
4,479,800 | Python generate dates series | <p>How can i generate array with dates like this:</p>
<p>Timestamps in JavaScript milliseconds format from 2010.12.01 00:00:00 to 2010.12.30 23:59:59
with step 5 minutes.</p>
<pre><code>['2010.12.01 00:00:00', '2010.12.01 00:05:00','2010.12.01 00:10:00','2010.12.01 00:15:00', ...]
</code></pre> | 4,479,842 | 3 | 0 | null | 2010-12-18 19:42:20.933 UTC | 8 | 2018-03-06 11:03:25.653 UTC | 2010-12-18 19:50:12.967 UTC | null | 464,492 | null | 464,492 | null | 1 | 36 | python|datetime|timestamp | 53,788 | <p>Well, obviously you start at the start time, loop until you reach the end time and increment inbetween.</p>
<pre><code>import datetime
dt = datetime.datetime(2010, 12, 1)
end = datetime.datetime(2010, 12, 30, 23, 59, 59)
step = datetime.timedelta(minutes=5)  # a 5-minute step, as asked for in the question
result = []
while dt < end:
result.append(dt.strftime('%Y-%m-%d %H:%M:%S'))
dt += step
</code></pre>
<p>Fairly trivial.</p> |
4,673,166 | python httplib Name or service not known | <p>I'm trying to use httplib to send credit card information to authorize.net. When i try to post the request, I get the following traceback:</p>
<pre><code>File "./lib/cgi_app.py", line 139, in run res = method()
File "/var/www/html/index.py", line 113, in ProcessRegistration conn.request("POST", "/gateway/transact.dll", mystring, headers)
File "/usr/local/lib/python2.7/httplib.py", line 946, in request self._send_request(method, url, body, headers)
File "/usr/local/lib/python2.7/httplib.py", line 987, in _send_request self.endheaders(body)
File "/usr/local/lib/python2.7/httplib.py", line 940, in endheaders self._send_output(message_body)
File "/usr/local/lib/python2.7/httplib.py", line 803, in _send_output self.send(msg)
File "/usr/local/lib/python2.7/httplib.py", line 755, in send self.connect()
File "/usr/local/lib/python2.7/httplib.py", line 1152, in connect self.timeout, self.source_address)
File "/usr/local/lib/python2.7/socket.py", line 567, in create_connection raise error, msg
gaierror: [Errno -2] Name or service not known
</code></pre>
<p>I build my request like so:</p>
<pre><code>mystring = urllib.urlencode(cardHash)
headers = {"Content-Type": "text/xml", "Content-Length": str(len(mystring))}
conn = httplib.HTTPSConnection("secure.authorize.net:443", source_address=("myurl.com", 443))
conn.request("POST", "/gateway/transact.dll", mystring, headers)
</code></pre>
<p>To add another layer to this, it was working on our development server, which runs the httplib from Python 2.6 and therefore has no source_address parameter in httplib.HTTPSConnection.</p>
<p>Any help is greatly appreciated.</p>
<p>===========================================================</p>
<p>EDIT:</p>
<p>I can run it from command line. Apparently this is some sort of permissions issue. Any ideas what permissions I would need to grant to which users to make this happen? Possibly Apache can't open the port?</p> | 4,681,560 | 4 | 3 | null | 2011-01-12 19:33:25.91 UTC | 5 | 2016-05-18 19:08:57.207 UTC | 2011-01-12 20:58:45.977 UTC | null | 355,689 | null | 355,689 | null | 1 | 9 | python|ssl|httplib | 54,266 | <p>The problem ultimately came down to the fact that selinux was stopping apache from getting that port. Disabling selinux fixed the problems. I had an issue later where i didn't have /var/www/.python-eggs/, so MySQLdb was hosing on import. But after a mkdir, it was fixed.</p> |
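<p>For anyone hitting the same thing: rather than disabling SELinux entirely, it is often enough to allow Apache-run code to make outgoing network connections. A minimal sketch, assuming the standard targeted policy:</p>
<pre><code># Allow httpd (and scripts it runs) to open outbound network connections;
# -P makes the change persistent across reboots.
setsebool -P httpd_can_network_connect 1
</code></pre>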
4,674,006 | Set specific bit in byte | <p>I'm trying to set bits in a Java byte variable. Java doesn't provide proper methods like <code>.setBit(i)</code>. Does anybody know how I can realize this?</p>
<p>I can iterate bit-wise through a given byte:</p>
<pre><code>if( (my_byte & (1 << i)) == 0 ){
}
</code></pre>
<p>However I cannot set this position to 1 or 0, can I?</p> | 4,674,055 | 5 | 4 | null | 2011-01-12 21:00:05.693 UTC | 32 | 2019-12-09 02:41:27.913 UTC | null | null | null | null | 59,300 | null | 1 | 85 | java|byte|bit-manipulation | 92,616 | <p>Use the bitwise <strong>OR</strong> (<code>|</code>) and <strong>AND</strong> (<code>&</code>) operators. To set a bit, namely turn the bit at <code>pos</code> to <code>1</code>:</p>
<pre class="lang-java prettyprint-override"><code>my_byte = my_byte | (1 << pos); // longer version, or
my_byte |= 1 << pos; // shorthand
</code></pre>
<p>To un-set a bit, or turn it to <code>0</code>:</p>
<pre class="lang-java prettyprint-override"><code>my_byte = my_byte & ~(1 << pos); // longer version, or
my_byte &= ~(1 << pos); // shorthand
</code></pre>
<p>For examples, see <a href="https://en.wikiversity.org/wiki/Advanced_Java/Bitwise_Operators" rel="noreferrer">Advanced Java/Bitwise Operators</a></p> |
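<p>For completeness, toggling and testing a bit follow the same pattern (the standard idiom, added here as an illustration):</p>
<pre><code>my_byte ^= 1 << pos;                          // toggle the bit at pos
boolean isSet = ((my_byte >> pos) & 1) == 1;  // test the bit at pos
</code></pre>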
4,574,318 | How do I upgrade my ruby 1.9.2-p0 to the latest patch level using rvm? | <p>My current version of ruby is <code>ruby 1.9.2p0 (2010-08-18 revision 29036) [x86_64-darwin10.5.0]</code> but I want to update it to the latest patch level using rvm. How can I do this?</p> | 4,574,388 | 8 | 0 | null | 2011-01-01 14:16:55.377 UTC | 41 | 2013-12-05 08:22:19.467 UTC | null | null | null | null | 527,727 | null | 1 | 118 | ruby-on-rails|ruby|rvm | 63,193 | <p>First of all, update your RVM installation by running <code>rvm get stable</code>.</p>
<p>To make sure you're running the new RVM version, you'll then need to run <code>rvm reload</code> (or just open a new terminal).</p>
<p>Once that's done, you can ask RVM to list the ruby versions available to install by running <code>rvm list known</code>.</p>
<p>In the output you should now see:</p>
<pre><code># MRI Rubies
...
[ruby-]1.9.2[-p320]
...
</code></pre>
<p>The square brackets around the patch level indicate that this is currently RVM's default patch level for ruby 1.9.2.</p>
<p>Finally, to install the new ruby version, just run <code>rvm install 1.9.2</code> - and wait for it to compile!</p> |
4,457,790 | Difference between style = "position:absolute" and style = "position:relative" | <p>Can anyone tell me the difference between <code>style = "position:absolute"</code> and <code>style = "position:relative"</code> and how they differ in case I add them to <code>div</code>/<code>span</code>/<code>input</code> elements?</p>
<p>I am using <code>absolute</code> right now, but I want to explore <code>relative</code> as well. How will this change the positioning?</p> | 4,457,821 | 10 | 4 | null | 2010-12-16 05:42:17.757 UTC | 49 | 2021-05-23 15:32:13.603 UTC | 2015-08-12 14:24:56.717 UTC | null | 1,079,075 | null | 519,755 | null | 1 | 106 | css|css-position | 218,880 | <p>Absolute positioning means that the element is taken completely out of the normal flow of the page layout. As far as the rest of the elements on the page are concerned, the absolutely positioned element simply doesn't exist. The element itself is then drawn separately, sort of "on top" of everything else, at the position you specify using the <code>left, right, top and bottom</code> attributes. </p>
<p>Using the position you specify with these attributes, the element is then placed at that position within its last ancestor element which has a position attribute of anything other than <code>static</code> (page elements default to static when no position attribute specified), or the document body (browser viewport) if no such ancestor exists.</p>
<p>For example, if I had this code:</p>
<pre><code><body>
<div style="position:absolute; left: 20px; top: 20px;"></div>
</body>
</code></pre>
<p>...the <code><div></code> would be positioned 20px from the top of the browser viewport, and 20px from the left edge of same.</p>
<p>However, if I did something like this:</p>
<pre><code> <div id="outer" style="position:relative">
<div id="inner" style="position:absolute; left: 20px; top: 20px;"></div>
</div>
</code></pre>
<p>...then the <code>inner</code> div would be positioned 20px from the top of the <code>outer</code> div, and 20px from the left edge of same, because the <code>outer</code> div isn't positioned with <code>position:static</code> because we've explicitly set it to use <code>position:relative</code>.</p>
<p>Relative positioning, on the other hand, is just like stating no positioning at all, but the <code>left, right, top and bottom</code> attributes "nudge" the element out of their normal layout. The rest of the elements on the page still get laid out as if the element was in its normal spot though.</p>
<p>For example, if I had this code:</p>
<pre><code><span>Span1</span>
<span>Span2</span>
<span>Span3</span>
</code></pre>
<p>...then all three <code><span></code> elements would sit next to each other without overlapping.</p>
<p>If I set the second <code><span></code> to use relative positioning, like this:</p>
<pre><code><span>Span1</span>
<span style="position: relative; left: -5px;">Span2</span>
<span>Span3</span>
</code></pre>
<p>...then Span2 would overlap the right side of Span1 by 5px. Span1 and Span3 would sit in exactly the same place as they did in the first example, leaving a 5px gap between the right side of Span2 and the left side of Span3.</p>
<p>Hope that clarifies things a bit.</p> |
4,348,351 | UISearchBar disable auto disable of cancel button | <p>I have implemented a UISearchBar into a table view and almost everything is working except one small thing: When I enter text and then press the search button on the keyboard, the keyboard goes away, the search results are the only items shown in the table, the text stays in the UISearchBar, but the cancel button gets disabled.</p>
<p>I have been trying to get my list as close to the functionality of the Apple contacts app and when you press search in that app, it doesn't disable the cancel button.</p>
<p>When I looked in the UISearchBar header file, I noticed a flag for autoDisableCancelButton under the _searchBarFlags struct but it is private.</p>
<p>Is there something that I am missing when I setup the UISearchBar?</p> | 4,349,003 | 11 | 0 | null | 2010-12-03 17:51:23.887 UTC | 8 | 2020-06-17 05:25:56.433 UTC | null | null | null | null | 284,757 | null | 1 | 37 | ios4|uisearchbar|cancel-button | 16,121 | <p>I found a solution. You can use this for-loop to loop over the subviews of the search bar and enable it when the search button is pressed on the keyboard.</p>
<pre class="lang-c prettyprint-override"><code>for (UIView *possibleButton in searchBar.subviews)
{
if ([possibleButton isKindOfClass:[UIButton class]])
{
UIButton *cancelButton = (UIButton*)possibleButton;
cancelButton.enabled = YES;
break;
}
}
</code></pre> |
14,859,266 | Input autofocus attribute | <p>I have places in my code where I have this:</p>
<pre><code><input data-ng-disabled="SOME_SCOPE_VARIABLE" />
</code></pre>
<p>I would like to be able to use it like this too:</p>
<pre><code><input data-ng-autofocus="SOME_SCOPE_VARIABLE" />
</code></pre>
<p>Or even better, mimicking how ng-style is done:</p>
<pre><code><input data-ng-attribute="{autofocus: SOME_SCOPE_VARIABLE}" />
</code></pre>
<p>Does this exist in the current version of AngularJS? I noticed in the code there's a BOOLEAN_ATTR which gets all the attr's that AngularJS supports. I don't want to modify that in fear of changing versions and forgetting to update.</p> | 14,859,639 | 10 | 0 | null | 2013-02-13 17:23:19.987 UTC | 9 | 2020-08-04 13:40:36.787 UTC | 2016-01-15 03:20:54.813 UTC | null | 2,025,923 | null | 973,651 | null | 1 | 26 | angularjs | 76,519 | <p><strong>Update</strong>: AngularJS now has an <a href="http://docs.angularjs.org/api/ng/directive/ngFocus" rel="noreferrer"><code>ngFocus</code></a> directive that evaluates an expression <em>on</em> focus, but I mention it here for the sake of completeness.</p>
<hr>
<p>The current version of AngularJS doesn't have a focus directive, but it's in the roadmap. Coincidentally, we were <a href="https://groups.google.com/d/msg/angular/lczCSTPMup0/qj1uMqYYdloJ" rel="noreferrer">talking about this</a> on the mailing list yesterday, and I came up with this:</p>
<pre class="lang-js prettyprint-override"><code>angular.module('ng').directive('ngFocus', function($timeout) {
return {
link: function ( scope, element, attrs ) {
scope.$watch( attrs.ngFocus, function ( val ) {
if ( angular.isDefined( val ) && val ) {
$timeout( function () { element[0].focus(); } );
}
}, true);
element.bind('blur', function () {
if ( angular.isDefined( attrs.ngFocusLost ) ) {
scope.$apply( attrs.ngFocusLost );
}
});
}
};
});
</code></pre>
<p>Which works off a scope variable as you requested:</p>
<pre class="lang-js prettyprint-override"><code><input type="text" ng-focus="isFocused" ng-focus-lost="loseFocus()">
</code></pre>
<p>Here's a fiddle: <a href="http://jsfiddle.net/ANfJZ/39/" rel="noreferrer">http://jsfiddle.net/ANfJZ/39/</a></p> |
14,647,723 | Django Forms: if not valid, show form with error message | <p>In Django, a view can check whether a submitted form is valid:</p>
<pre><code>if form.is_valid():
return HttpResponseRedirect('/thanks/')
</code></pre>
<p>But I'm missing what to do if it isn't valid? How do I return the form with the error messages? I'm not seeing the "else" in any of the examples.</p> | 14,647,770 | 8 | 0 | null | 2013-02-01 13:48:35.5 UTC | 40 | 2021-02-07 19:34:49.427 UTC | 2020-11-28 20:28:54.63 UTC | null | 2,429,989 | null | 984,003 | null | 1 | 139 | django|django-forms | 207,107 | <p>If you render the same view when the form is not valid then in template you can <a href="https://docs.djangoproject.com/en/2.2/ref/forms/api/#django.forms.Form.errors" rel="noreferrer">access the form errors using <code>form.errors</code></a>.</p>
<pre><code>{% if form.errors %}
{% for field in form %}
{% for error in field.errors %}
<div class="alert alert-danger">
<strong>{{ error|escape }}</strong>
</div>
{% endfor %}
{% endfor %}
{% for error in form.non_field_errors %}
<div class="alert alert-danger">
<strong>{{ error|escape }}</strong>
</div>
{% endfor %}
{% endif %}
</code></pre>
<p>An example:</p>
<pre><code>def myView(request):
form = myForm(request.POST or None, request.FILES or None)
if request.method == 'POST':
if form.is_valid():
return HttpResponseRedirect('/thanks/')
return render(request, 'my_template.html', {'form': form})
</code></pre> |
49,754,286 | Multiple images, one Dockerfile | <p>How can I create two images from one Dockerfile? They only copy different files.</p>
<p>Shouldn't this produce two images <strong>img1</strong> & <strong>img2</strong>? Instead it produces two unnamed images <strong>d00a6fc336b3</strong> & <strong>a88fbba7eede</strong>.</p>
<p>Dockerfile:</p>
<pre><code>FROM alpine as img1
COPY file1.txt .
FROM alpine as img2
COPY file2.txt .
</code></pre>
<p>Instead this is the result of <em>docker build .</em></p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> d00a6fc336b3 4 seconds ago 4.15 MB
<none> <none> a88fbba7eede 5 seconds ago 4.15 MB
alpine latest 3fd9065eaf02 3 months ago 4.15 MB
</code></pre> | 49,754,996 | 2 | 1 | null | 2018-04-10 12:54:56.983 UTC | 6 | 2020-08-25 20:05:47.597 UTC | null | null | null | null | 2,331,649 | null | 1 | 32 | docker|dockerfile | 33,143 | <p>You can use a <a href="https://docs.docker.com/compose/compose-file/" rel="noreferrer"><code>docker-compose</code></a> file using the <a href="https://docs.docker.com/compose/compose-file/#target" rel="noreferrer"><code>target</code></a> option:</p>
<pre><code>version: '3.4'
services:
img1:
build:
context: .
target: img1
img2:
build:
context: .
target: img2
</code></pre>
<p>using your <code>Dockerfile</code> with the following content:</p>
<pre><code>FROM alpine as img1
COPY file1.txt .
FROM alpine as img2
COPY file2.txt .
</code></pre> |
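<p>As a side note (not part of the original answer): with a multi-stage Dockerfile like this you can also build and tag each stage directly, without docker-compose, using the <code>--target</code> flag:</p>
<pre><code># Build only the requested stage and give it a name
docker build --target img1 -t img1 .
docker build --target img2 -t img2 .
</code></pre>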
38,661,090 | Token based authentication in Web API without any user interface | <p>I am developing a REST API in ASP.Net Web API. My API will be only accessible via non-browser based clients. I need to implement security for my API so I decided to go with Token based authentication. I have a fair understanding of token based authentication and have read a few tutorials, but they all have some user interface for login. I don't need any UI for login as the login details will be passed by the client through HTTP POST which will be authorized from our database. How can I implement token based authentication in my API? Please note- my API will be accessed in high frequency so I also have to take care of performance.
Please let me know if I can explain it any better.</p> | 38,670,221 | 2 | 9 | null | 2016-07-29 14:17:04.857 UTC | 48 | 2017-11-02 07:50:56.88 UTC | 2017-11-02 07:50:56.88 UTC | null | 961,095 | null | 5,014,099 | null | 1 | 70 | c#|.net|authentication|asp.net-web-api|http-token-authentication | 177,474 | <p>I think there is some confusion about the difference between MVC and Web Api. In short, for MVC you can use a login form and create a session using cookies. For Web Api there is no session. That's why you want to use the token.</p>
<p>You do not need a login form. The Token endpoint is all you need. Like Win described you'll send the credentials to the token endpoint where it is handled.</p>
<p>Here's some client side C# code to get a token:</p>
<pre><code> //using System;
//using System.Collections.Generic;
//using System.Net;
//using System.Net.Http;
//string token = GetToken("https://localhost:<port>/", userName, password);
static string GetToken(string url, string userName, string password) {
var pairs = new List<KeyValuePair<string, string>>
{
new KeyValuePair<string, string>( "grant_type", "password" ),
new KeyValuePair<string, string>( "username", userName ),
new KeyValuePair<string, string> ( "Password", password )
};
var content = new FormUrlEncodedContent(pairs);
ServicePointManager.ServerCertificateValidationCallback += (sender, cert, chain, sslPolicyErrors) => true;
using (var client = new HttpClient()) {
var response = client.PostAsync(url + "Token", content).Result;
return response.Content.ReadAsStringAsync().Result;
}
}
</code></pre>
<p>In order to use the token add it to the header of the request:</p>
<pre><code> //using System;
//using System.Collections.Generic;
//using System.Net;
//using System.Net.Http;
//var result = CallApi("https://localhost:<port>/something", token);
static string CallApi(string url, string token) {
ServicePointManager.ServerCertificateValidationCallback += (sender, cert, chain, sslPolicyErrors) => true;
using (var client = new HttpClient()) {
if (!string.IsNullOrWhiteSpace(token)) {
var t = JsonConvert.DeserializeObject<Token>(token);
client.DefaultRequestHeaders.Clear();
client.DefaultRequestHeaders.Add("Authorization", "Bearer " + t.access_token);
}
var response = client.GetAsync(url).Result;
return response.Content.ReadAsStringAsync().Result;
}
}
</code></pre>
<p>Where Token is:</p>
<pre><code>//using Newtonsoft.Json;
class Token
{
public string access_token { get; set; }
public string token_type { get; set; }
public int expires_in { get; set; }
public string userName { get; set; }
[JsonProperty(".issued")]
public string issued { get; set; }
[JsonProperty(".expires")]
public string expires { get; set; }
}
</code></pre>
<p>Now for the server side:</p>
<p>In Startup.Auth.cs</p>
<pre><code> var oAuthOptions = new OAuthAuthorizationServerOptions
{
TokenEndpointPath = new PathString("/Token"),
Provider = new ApplicationOAuthProvider("self"),
AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
// https
AllowInsecureHttp = false
};
// Enable the application to use bearer tokens to authenticate users
app.UseOAuthBearerTokens(oAuthOptions);
</code></pre>
<p>And in ApplicationOAuthProvider.cs the code that actually grants or denies access:</p>
<pre><code>//using Microsoft.AspNet.Identity.Owin;
//using Microsoft.Owin.Security;
//using Microsoft.Owin.Security.OAuth;
//using System;
//using System.Collections.Generic;
//using System.Security.Claims;
//using System.Threading.Tasks;
public class ApplicationOAuthProvider : OAuthAuthorizationServerProvider
{
private readonly string _publicClientId;
public ApplicationOAuthProvider(string publicClientId)
{
if (publicClientId == null)
throw new ArgumentNullException("publicClientId");
_publicClientId = publicClientId;
}
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
var userManager = context.OwinContext.GetUserManager<ApplicationUserManager>();
var user = await userManager.FindAsync(context.UserName, context.Password);
if (user == null)
{
context.SetError("invalid_grant", "The user name or password is incorrect.");
return;
}
ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager);
var propertyDictionary = new Dictionary<string, string> { { "userName", user.UserName } };
var properties = new AuthenticationProperties(propertyDictionary);
AuthenticationTicket ticket = new AuthenticationTicket(oAuthIdentity, properties);
// Token is validated.
context.Validated(ticket);
}
public override Task TokenEndpoint(OAuthTokenEndpointContext context)
{
foreach (KeyValuePair<string, string> property in context.Properties.Dictionary)
{
context.AdditionalResponseParameters.Add(property.Key, property.Value);
}
return Task.FromResult<object>(null);
}
public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
{
// Resource owner password credentials does not provide a client ID.
if (context.ClientId == null)
context.Validated();
return Task.FromResult<object>(null);
}
public override Task ValidateClientRedirectUri(OAuthValidateClientRedirectUriContext context)
{
if (context.ClientId == _publicClientId)
{
var expectedRootUri = new Uri(context.Request.Uri, "/");
if (expectedRootUri.AbsoluteUri == context.RedirectUri)
context.Validated();
}
return Task.FromResult<object>(null);
}
}
</code></pre>
<p>As you can see there is no controller involved in retrieving the token. In fact, you can remove all MVC references if you want a Web Api only. I have simplified the server side code to make it more readable. You can add code to upgrade the security.</p>
<p>Make sure you use SSL only. Implement the RequireHttpsAttribute to force this.</p>
<p>You can use the Authorize / AllowAnonymous attributes to secure your Web Api. Additionally you can add filters (like RequireHttpsAttribute) to make your Web Api more secure. I hope this helps.</p> |
22,942,479 | Authentication / Authorization MVC 5 and Web API - Katana/Owin | <p>I'm having problems trying to decide on a route to take on a project I have.</p>
<p>I've been reading up on OWIN specs and Katana implementation within .NET. The reason why I'd like to go with the Katana route is because of the owin components associated with ADFS and Token/Cookie generation.</p>
<p>I have two projects, one for the MVC 5 website, and one for the Web API. They may reside on two separate servers in the future, but for now they are on the same one.</p>
<p>I know I will be using IIS, so the Owin pipeline isn't necessary for me to investigate.</p>
<p>The requirement I have is that there will be users logging in using ADFS, and other users logging in using Token/Cookie generation, with Role/Membership providers. Based on who is authenticated, certain sections of my web page will be exposed. The web pages are rendered with Razor.</p>
<p>Does anyone have any material that I can read through to help explain a design flow I can take? Or anyone has done a project similar to what I'm going through that can add any advice? There's a lot of disparate documentations that describe specific things that I need, but not the big picture; like only talking about WebAPI and ADFS, or WebAPI and windows azure, etc etc.</p>
<p>My theory is to implement authentication/authorization on the MVC5 website project, authorization on the Web API (somehow communication between the two needs to exist). I then maybe create a copy of the project for ADFS and another copy for Token/cookie authentication? Or maybe I'd have to make 4 different kinds of authentications: 2 for adfs where I authenticate against the MVC5 website and Web API, and again another 2 for token/cookie generation.</p>
<p>Any suggestions would be helpful as I'm not very familiar with this kind of technology.</p> | 23,522,431 | 2 | 0 | null | 2014-04-08 16:10:07.903 UTC | 8 | 2015-04-10 16:12:05.43 UTC | 2014-04-08 18:38:45.283 UTC | null | 1,449,587 | null | 1,449,587 | null | 1 | 8 | asp.net-web-api|owin|adfs|katana | 5,065 | <p>I can offer that the WsFederation option in OWIN is nice but requires cookies...and they're a different kind of cookie than local auth with cookies. ADFS 2.0/WsFederation uses AuthenticationType="Cookies", and local auth uses AuthenticationType="ApplicationCookie". They are apparently incompatible as far as I can tell. I think you'll have to use token auth for ADFS but I believe that requires ADFS 3.0 on 2012R2. For that use OWIN OAuth.</p>
<p>UPDATE: after working on this for a while, I've figured out how to get these two authentication types to coexist peacefully in the same web application. Using OWIN, set up to call UseCookieAuthentication TWICE, once to enable the new WsFederationAuthentication middleware, and again to enable local cookie authentication. It's not intuitive but behind the scenes, specifying different authentication types for each sets them up as different auth "engines". Here's how it looks in my Startup:</p>
<pre><code>app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
LoginPath = new PathString("/Account/Login"),
Provider = new CookieAuthenticationProvider
{
OnResponseSignIn = ctx =>
{
ctx.Identity = TransformClaims(ctx, app);
}
}
});
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
Provider = new CookieAuthenticationProvider
{
OnResponseSignIn = ctx =>
{
ctx.Identity = TransformClaims(ctx, app);
}
}
});
app.UseWsFederationAuthentication(new WsFederationAuthenticationOptions
{
Wtrealm = Realm,
MetadataAddress = Metadata,
Caption = "Active Directory",
SignInAsAuthenticationType = CookieAuthenticationDefaults.AuthenticationType
});
</code></pre>
<p>This is successfully allowing users to authenticate to either local SQL tables or to ADFS 2.0. The TransformClaims callout is allowing me to normalize my claims between these two providers so they are consistent.</p>
<p>EDIT: Here's a very rudimentary TransformClaims. You can do a lot of things inside this: get the user from your DB, setup claims for navigation, custom permissions, role collections, whatever. I just built this slimmed-down version from a much larger implementation so I've not run it but hopefully you get the idea of how to leverage the OnResponseSignIn event.</p>
<pre><code>private static ClaimsIdentity TransformClaims(CookieResponseSignInContext ctx, IAppBuilder app)
{
var ident = ctx.Identity;
var claimEmail = ident.Claims.SingleOrDefault(c => c.Type == ClaimTypes.Email);
var claimName = ident.Claims.SingleOrDefault(c => c.Type == ClaimTypes.Name);
//normalize my string identifier
var loginString = (claimEmail != null) ? claimEmail.Value : (claimName != null) ? claimName.Value : null;
var efctx = ctx.OwinContext.Get<DBEntities>();
var user = UserBL.GetActiveUserByEmailOrName(efctx, loginString);
if (user == null)
{
//user was auth'd by ADFS but hasn't been auth'd by this app
ident.AddClaim(new Claim(ClaimTypesCustom.Unauthorized, "true"));
return ident;
}
if (ident.Claims.First().Issuer == "LOCAL AUTHORITY")
{
//Local
//local already has claim type "Name"
//local didn't have claim type "Email" - adding it
ident.AddClaim(new Claim(ClaimTypes.Email, user.Email));
}
else
{
//ADFS
//ADFS already has claim type "Email"
//ADFS didn't have claim type "Name" - adding it
ident.SetClaim(ClaimTypes.Name, user.UserName);
}
//now ident has "Name" and "Email", regardless of where it came from
return ident;
}
</code></pre> |
41,919,866 | How to write a rule to prevent any deletion of node from database | <p>I am trying to write rules to secure my database, but I am confused about writing a rule that will prevent any node from being deleted. I have read about <code>newData.exists()</code>, but when I tried it in the simulator the deletion succeeded! Since a node can be deleted by setting its value to null, I simulated setting the node's value to null and it was successful, which was not desired. </p>
<p>Suppose I have this node:</p>
<pre><code>root{
Number of Users:20
}
</code></pre>
<p>And I wrote these rules:</p>
<pre><code>"Number of Users":{
".read":true,
".write":"auth!==null && newData.exists()"
}
</code></pre>
<p>Am I making any mistake, please correct me.</p> | 41,922,261 | 1 | 0 | null | 2017-01-29 10:28:18.103 UTC | 10 | 2017-01-29 15:26:35.54 UTC | null | null | null | null | 6,627,740 | null | 1 | 13 | firebase-realtime-database|firebase-security | 5,572 | <p>To allow adding new nodes, but prevent deleting or overwriting any node:</p>
<pre><code>".write": "!data.exists()"
</code></pre>
<p>To allow adding and overwriting, but not deleting, any node:</p>
<pre><code>".write": "newData.exists()"
</code></pre>
<p><strong>Update: screenshot of the simulator for these rules</strong>
<a href="https://i.stack.imgur.com/oj0oK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oj0oK.png" alt="Cannot write null"></a></p> |
3,097,779 | decltype and parentheses | <p>I don't understand the last line of the example on page 148 of the FCD (Β§7.6.1.2/4):</p>
<pre><code>const int&& foo();
int i;
struct A { double x; };
const A* a = new A();
decltype(foo()) x1 = i; // type is const int&&
decltype(i) x2; // type is int
decltype(a->x) x3; // type is double
decltype((a->x)) x4 = x3; // type is const double&
</code></pre>
<p>Why do the parentheses make a difference here? Shouldn't it simply be <code>double</code> like in the line above?</p> | 3,097,803 | 3 | 1 | null | 2010-06-22 23:03:48.71 UTC | 21 | 2014-06-09 06:40:54.383 UTC | 2014-06-09 06:40:54.383 UTC | null | 819,272 | null | 252,000 | null | 1 | 53 | c++|c++11|type-inference|decltype | 4,380 | <p>Just above that example, it says</p>
<blockquote>
<ul>
<li>if e is an unparenthesized id-expression or a class member access (5.2.5), decltype(e) is the type of the entity named by e.</li>
<li>if e is an lvalue, decltype(e) is T&, where T is the type of e;</li>
</ul>
</blockquote>
<p>I think <code>decltype(a->x)</code> is an example of the "class member access" and <code>decltype((a->x))</code> is an example of lvalue.</p> |
2,453,308 | Monitoring used connections on mysql to debug 'too many connections' | <p>On a LAMP production server I occasionally get the 'too many connections' error from MySQL. I want to add monitoring to find out whether the reason is that I exceed the max-connections limit. </p>
<p>My question: How can I query from mysql or from mysqladmin the current number of used connections?
(I noticed that show status gives total connections and not the currently used ones.)</p> | 2,453,899 | 4 | 0 | null | 2010-03-16 09:51:33.587 UTC | 4 | 2016-07-05 14:37:01.297 UTC | null | null | null | null | 64,106 | null | 1 | 21 | mysql | 49,138 | <p>A very powerful tool to monitor MySQL is <code>innotop</code>. You can find it here: </p>
<p><a href="https://github.com/innotop/innotop" rel="noreferrer">https://github.com/innotop/innotop</a></p>
<p>In Debian Lenny, it is part of the package mysql-client-5.0 and I guess it is available for other distros as well. It is especially powerful in monitoring InnoDB internals, but provides general monitoring of the server as well.</p>
<p>In "Variables & Status" mode, it monitors the variables "Connections" and "Max_used_Connections" (among others). It displays absolute and incremental values - the latter might give you an idea about the current connections. </p>
<p>Since innotop provides a non-interactive mode, you can easily build fully automated monitoring by calling innotop from some customized scripts, nagios checks or whatever system you have. </p> |
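<p>If you just want the raw numbers without extra tooling, the relevant counters can also be read directly from the server (these are standard MySQL status/system variables):</p>
<pre><code>-- currently open connections
SHOW STATUS LIKE 'Threads_connected';

-- high-water mark since the server started
SHOW STATUS LIKE 'Max_used_connections';

-- the configured limit you are comparing against
SHOW VARIABLES LIKE 'max_connections';

-- list the individual connections and what they are doing
SHOW PROCESSLIST;
</code></pre>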
2,751,935 | Moq.Mock<T> - how to set up a method that takes an expression | <p>I am Mocking my repository interface and am not sure how to set up a method that takes an expression and returns an object? I am using Moq and NUnit.</p>
<p>Interface:</p>
<pre><code>public interface IReadOnlyRepository : IDisposable
{
IQueryable<T> All<T>() where T : class;
T Single<T>(Expression<Func<T, bool>> expression) where T : class;
}
</code></pre>
<p>Test with IQueryable is already set up, but don't know how to set up the T Single:</p>
<pre><code>private Moq.Mock<IReadOnlyRepository> _mockRepos;
private AdminController _controller;
[SetUp]
public void SetUp()
{
var allPages = new List<Page>();
for (var i = 0; i < 10; i++)
{
allPages.Add(new Page { Id = i, Title = "Page Title " + i, Slug = "Page-Title-" + i, Content = "Page " + i + " on page content." });
}
_mockRepos = new Moq.Mock<IReadOnlyRepository>();
_mockRepos.Setup(x => x.All<Page>()).Returns(allPages.AsQueryable());
//Not sure what to do here???
_mockRepos.Setup(x => x.Single<Page>()
//----
_controller = new AdminController(_mockRepos.Object);
}
</code></pre> | 2,751,981 | 4 | 0 | null | 2010-05-01 23:36:21.643 UTC | 15 | 2019-07-28 17:58:02.013 UTC | 2019-07-28 17:58:02.013 UTC | null | 964,243 | null | 187,350 | null | 1 | 31 | c#|unit-testing|nunit|moq | 24,508 | <p>You can set it up like this:</p>
<pre><code>_mockRepos.Setup(x => x.Single<Page>(It.IsAny<Expression<Func<Page, bool>>>()))//.Returns etc...;
</code></pre>
<p>However you are coming up against one of Moq's shortcomings. You would want to put an actual expression there instead of using <code>It.IsAny</code>, but Moq doesn't support setting up methods that take expressions with specific expressions (it's a difficult feature to implement). The difficulty comes from having to figure out whether two expressions are equivalent.</p>
<p>So in your test you can pass in <em>any</em> <code>Expression<Func<Page,bool>></code> and it will pass back whatever you have setup the mock to return. The value of the test is a little diluted. </p> |
29,247,032 | angular ui-grid event: row selected | <p>I am trying to enable/disable a button based on the selection of a row on a ui-grid. If there are no rows selected, the button is disabled.</p>
<p>I found this <a href="http://plnkr.co/edit/fNemOQ?p=preview">plunkr</a> with the old ng-grid way of firing an event after a row is selected. </p>
<pre><code> $scope.gridOptions = {
data: 'myData',
selectedItems: $scope.selections,
enableRowSelection: true,
afterSelectionChange:function() {
if ($scope.selections != "" ) {
$scope.disabled = false;
} else {
$scope.disabled = true;
}
}
};
</code></pre>
<p>Unfortunately it does not work, and I have found no sign of such event in the ui-grid <a href="http://ui-grid.info/docs/#/api">documentation</a>.</p>
<p>How can I achieve that with ui-grid?</p> | 29,247,110 | 3 | 0 | null | 2015-03-25 03:41:52.58 UTC | 2 | 2018-08-06 09:57:45.463 UTC | null | null | null | null | 138,680 | null | 1 | 17 | javascript|angularjs|angular-ui-grid | 50,372 | <p>In ui-grid, you register a callback function on the event "rowSelectionChanged"</p>
<pre><code>$scope.gridOptions.onRegisterApi = function (gridApi) {
    $scope.gridApi = gridApi;
    gridApi.selection.on.rowSelectionChanged($scope, callbackFunction);
    gridApi.selection.on.rowSelectionChangedBatch($scope, callbackFunction);
};

function callbackFunction(row) {
    var msg = 'row selected ' + row.isSelected;
    $log.log(msg);
}
</code></pre>
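<p>To tie this back to the enable/disable flag from the question, the callback could, for example, inspect the current selection (<code>getSelectedRows()</code> is part of the selection API); this is just a sketch:</p>
<pre><code>function callbackFunction(row) {
    // disable the button whenever nothing is selected
    $scope.disabled = $scope.gridApi.selection.getSelectedRows().length === 0;
}
</code></pre>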
<p>I think you should take a look at the tutorial page in ui-grid: <a href="http://ui-grid.info/docs/#/tutorial/210_selection" rel="nofollow noreferrer">http://ui-grid.info/docs/#/tutorial/210_selection</a>. The API page sucks, in my opinion :(.</p> |
38,601,440 | What is the point of uWSGI? | <p>I'm looking at the <a href="https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface" rel="noreferrer">WSGI specification</a> and I'm trying to figure out how servers like <a href="https://uwsgi-docs.readthedocs.io/en/latest" rel="noreferrer">uWSGI</a> fit into the picture. I understand the point of the WSGI spec is to separate web servers like nginx from web applications like something you'd write using <a href="http://flask.pocoo.org" rel="noreferrer">Flask</a>. What I don't understand is what uWSGI is for. Why can't nginx directly call my Flask application? Can't flask speak WSGI directly to it? Why does uWSGI need to get in between them?</p>
<p>There are two sides in the WSGI spec: the server and the web app. Which side is uWSGI on?</p> | 38,685,758 | 5 | 0 | null | 2016-07-26 23:27:07.897 UTC | 37 | 2021-10-27 10:42:38.237 UTC | 2016-07-27 03:52:24.61 UTC | null | 1,334,007 | null | 1,334,007 | null | 1 | 125 | python|nginx|flask|wsgi|uwsgi | 42,055 | <p>Okay, I think I get this now.</p>
<blockquote>
<p>Why can't nginx directly call my Flask application?</p>
</blockquote>
<p>Because <code>nginx</code> doesn't support the WSGI spec. Technically nginx could implement the <code>WSGI</code> spec if they wanted, they just haven't.</p>
<p>That being the case, we need a web server that does implement the spec, which is what the <code>uWSGI</code> server is for.</p>
<p>Note that <code>uWSGI</code> is a full fledged http server that can and does work well on its own. I've used it in this capacity several times and it works great. If you need super high throughput for static content, then you have the option of sticking <code>nginx</code> in front of your <code>uWSGI</code> server. When you do, they will communicate over a low level protocol known as <code>uwsgi</code>.</p>
<p><em>"What the what?! Another thing called uwsgi?!"</em> you ask. Yeah, it's confusing. When you reference <code>uWSGI</code> you are talking about an http server. When you talk about <code>uwsgi</code> (all lowercase) you are talking about a <a href="http://uwsgi-docs.readthedocs.io/en/latest/Protocol.html" rel="noreferrer">binary protocol</a> that the <code>uWSGI</code> <em>server</em> uses to talk to other servers like <code>nginx</code>. They picked a bad name on this one.</p>
<p>For anyone who is interested, I wrote a <a href="http://www.ultravioletsoftware.com/single-post/2017/03/23/An-introduction-into-the-WSGI-ecosystem" rel="noreferrer">blog article</a> about it with more specifics, a bit of history, and some examples.</p> |
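<p>To make that concrete, here is a minimal sketch (module name, paths and ports are placeholders, not taken from the answer above). uWSGI can either serve HTTP itself or expose the binary <code>uwsgi</code> protocol for nginx to proxy to via <code>uwsgi_pass</code> (together with <code>include uwsgi_params;</code>):</p>
<pre><code># Standalone: uWSGI speaks HTTP directly and calls the WSGI app
uwsgi --http :8000 --wsgi-file app.py --callable app

# Behind nginx: expose a uwsgi-protocol socket and point
# "uwsgi_pass 127.0.0.1:3031;" at it in the nginx config
uwsgi --socket 127.0.0.1:3031 --wsgi-file app.py --callable app
</code></pre>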
54,234,515 | Get by HTML element with React Testing Library? | <p>I'm using the <code>getByTestId</code> function in React Testing Library:</p>
<pre class="lang-js prettyprint-override"><code>const button = wrapper.getByTestId("button");
expect(heading.textContent).toBe("something");
</code></pre>
<p>Is it possible / advisable to search for HTML elements instead? So something like this:</p>
<pre class="lang-js prettyprint-override"><code>const button = wrapper.getByHTML("button");
const heading = wrapper.getByHTML("h1");
</code></pre> | 54,250,578 | 3 | 0 | null | 2019-01-17 11:03:33.953 UTC | 9 | 2022-07-02 18:38:37.083 UTC | 2021-12-14 17:19:24.093 UTC | null | 863,110 | null | 467,875 | null | 1 | 75 | react-testing-library | 107,113 | <p>I'm not sure what <code>wrapper</code> is in this case. But to answer your two questions: yes it's possible to get by HTML element and no, it's not advisable.</p>
<p>This is how you would do it:</p>
<pre class="lang-js prettyprint-override"><code>// Possible but not advisable
const { container } = render(<MyComponent />)
// `container` is just a DOM node
const button = container.querySelector('button')
</code></pre>
<p>Since you get back a DOM node you can use all the normal DOM APIs such as <a href="https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector" rel="noreferrer"><code>querySelector</code></a>.</p>
<p>Now, why is this not advisable. A big selling point of react-testing-library is that you test your components as a user does. This means not relying on implementation details. For instance, you don't have direct access to a component's state.</p>
<p>Writing tests this way is a bit harder but allows you to write more robust tests.</p>
<p>In your case, I would argue that the underlying HTML is an implementation detail. What happens if you change your HTML structure so that the <code>h1</code> is now an <code>h2</code> or a <code>div</code>? The test will break. If instead, you look at these elements by text the tag becomes irrelevant.</p>
<p>In some cases, the normal query helpers are not enough. For those events you can use a <code>data-testid</code> and use <code>getByTestId</code>.</p> |
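<p>For example, instead of reaching for tag names, the same elements could be queried by role and by text (the component and the text are placeholders):</p>
<pre><code>const { getByRole, getByText } = render(<MyComponent />)

// survives markup changes as long as there is still a button
const button = getByRole('button')

// survives changing the <h1> to an <h2> or a <div>
const heading = getByText('something')
</code></pre>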
40,682,918 | Kotlin - Most idiomatic way to convert a List to a MutableList | <p>I have a method (<code>getContacts</code>) that returns a <code>List<Contact></code> and I need to convert this result to a <code>MutableList<Contact></code>. Currently the best way I can think of doing it is like this:</p>
<pre><code>val contacts: MutableList<Contact> = ArrayList(presenter.getContacts())
</code></pre>
<p>Is there a more idiomatic/"less Java" way to do that?</p> | 40,683,134 | 3 | 0 | null | 2016-11-18 17:33:54.2 UTC | 3 | 2020-08-01 10:59:42.927 UTC | 2020-08-01 10:59:42.927 UTC | null | 13,363,205 | null | 3,752,244 | null | 1 | 78 | android|kotlin | 39,078 | <p>Consider using the <a href="https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/to-mutable-list.html" rel="noreferrer"><code>toMutableList()</code></a> function:</p>
<pre><code>presenter.getContacts().toMutableList()
</code></pre>
<p>There are <code>toMutableList()</code> extensions for the stdlib types that one might want to convert to a mutable list: <code>Collection<T></code>, <code>Iterable<T></code>, <code>Sequence<T></code>, <code>CharSequence</code>, <code>Array<T></code> and primitive arrays.</p> |
34,978,250 | CoordinatorLayout with RecyclerView And Collapsing header | <p>I have a layout like the following:</p>
<p><a href="https://i.stack.imgur.com/H839N.png" rel="noreferrer"><img src="https://i.stack.imgur.com/H839N.png" alt="enter image description here"></a></p>
<p>(Toolbar,
Header View, Text View, RecyclerView)</p>
<p>I need the header to be collapsed when I scroll the RecyclerView's items,
so that only the "Choose item" view and the RecyclerView are left on the screen.</p>
<p>I saw examples when toolbar is being collapsed, but I need toolbar to be present always.</p>
<p>Which layouts/behavior should I use to get this work?</p> | 34,980,319 | 2 | 0 | null | 2016-01-24 16:25:29.803 UTC | 28 | 2018-12-21 11:26:25.877 UTC | 2018-12-21 11:25:51.847 UTC | null | 1,000,551 | null | 5,833,298 | null | 1 | 45 | android|android-layout|material-design|android-coordinatorlayout|androiddesignsupport | 38,235 | <p>You can achieve it by having this layout:</p>
<pre><code><android.support.design.widget.CoordinatorLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent">
<android.support.design.widget.AppBarLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<android.support.design.widget.CollapsingToolbarLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
app:layout_scrollFlags="scroll|exitUntilCollapsed">
<!-- HEADER -->
<RelativeLayout
...
app:layout_collapseMode="parallax">
.....
</RelativeLayout>
<android.support.v7.widget.Toolbar
android:layout_width="match_parent"
android:layout_height="?attr/actionBarSize"
app:layout_collapseMode="pin" />
</android.support.design.widget.CollapsingToolbarLayout>
<!-- IF YOU WANT TO KEEP "Choose Item" always on top of the RecyclerView, put this TextView here
<TextView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="bottom"
android:text="choose item" />
-->
</android.support.design.widget.AppBarLayout>
<android.support.v7.widget.RecyclerView
android:layout_width="match_parent"
android:layout_height="match_parent"
app:layout_behavior="@string/appbar_scrolling_view_behavior" />
</android.support.design.widget.CoordinatorLayout>
</code></pre>
<p>You pin your toolbar by having the <code>app:layout_collapseMode="pin"</code> property set. You make <code>RecyclerView</code> properly scrollable by setting <code>app:layout_behavior="@string/appbar_scrolling_view_behavior"</code> and that's pretty much it.</p>
<p><strong>NB!</strong> Position of "Choose item" <code>TextView</code> depends on the particular behaviour you want to achieve:</p>
<ul>
<li>you can include it as a first element of your <code>RecyclerView</code>'s <code>Adapter</code> to scroll it away, once user start scrolling through the <code>RecyclerView</code>;</li>
<li>you can add it into <code>AppBarLayout</code> so it will always stick on top of the <code>RecyclerView</code>, whenever you scroll it or not;</li>
</ul>
<p>You can read more here <a href="http://android-developers.blogspot.de/2015/05/android-design-support-library.html" rel="noreferrer">Android Design Support Library</a> and here <a href="http://antonioleiva.com/coordinator-layout/" rel="noreferrer">Design Support Library (III): Coordinator Layout</a></p>
<p>I hope, it helps!</p> |
29,588,158 | check if all elements of an array have the same value in Swift | <p>Is there a function in Swift that checks whether all elements of an array have the same value? In my case, it's an array of type <code>Int</code>. I know I can iterate over it using a simple for loop I was just wondering if there is something that is built in and quicker.</p> | 29,588,187 | 3 | 0 | null | 2015-04-12 09:56:57.657 UTC | 12 | 2019-02-19 20:12:11.567 UTC | null | null | null | null | 2,065,922 | null | 1 | 35 | arrays|swift | 29,233 | <p>Any method must iterate over all elements until a different element is found:</p>
<pre><code>func allEqualUsingLoop<T : Equatable>(array : [T]) -> Bool {
if let firstElem = array.first {
for elem in array {
if elem != firstElem {
return false
}
}
}
return true
}
</code></pre>
<p>Instead of an explicit loop you can use the <code>contains()</code> function:</p>
<pre><code>func allEqualUsingContains<T : Equatable>(array : [T]) -> Bool {
if let firstElem = array.first {
return !contains(array, { $0 != firstElem })
}
return true
}
</code></pre>
<p>If the array elements are <code>Hashable</code> (such as <code>Int</code>) then you can
create a <code>Set</code> (available since Swift 1.2) from the array elements and check if it has at most one element.
<pre><code>func allEqualUsingSet<T : Hashable>(array : [T]) -> Bool {
let uniqueElements = Set(array)
return count(uniqueElements) <= 1
}
</code></pre>
<p>A quick benchmarking test revealed that the "contains" method is much faster than the "set" method
for an array of 1,000,000 integers, in particular if the elements are
<em>not</em> all equal. This makes sense because <code>contains()</code> returns as soon
as a non-matching element is found, whereas <code>Set(array)</code> always
traverses the entire array.</p>
<p>Also the "contains" methods is equally fast or slightly faster than an explicit loop.</p>
<p>Here is some simple benchmarking code. Of course the results can vary
with the array size, the number of different elements and the elements data type.</p>
<pre><code>func measureExecutionTime<T>(title: String, @noescape f : (() -> T) ) -> T {
let start = NSDate()
let result = f()
let end = NSDate()
let duration = end.timeIntervalSinceDate(start)
println("\(title) \(duration)")
return result
}
var array = [Int](count: 1_000_000, repeatedValue: 1)
array[500_000] = 2
let b1 = measureExecutionTime("using loop ") {
return allEqualUsingLoop(array)
}
let b2 = measureExecutionTime("using contains") {
allEqualUsingContains(array)
}
let b3 = measureExecutionTime("using set ") {
allEqualUsingSet(array)
}
</code></pre>
<p>Results (on a MacBook Pro, Release configuration):</p>
<pre>
using loop 0.000651001930236816
using contains 0.000567018985748291
using set 0.0344770550727844
</pre>
<p>With <code>array[1_000] = 2</code> the results are</p>
<pre>
using loop 9.00030136108398e-06
using contains 2.02655792236328e-06
using set 0.0306439995765686
</pre>
<hr>
<p><strong>Update for Swift 2/Xcode 7:</strong> Due to various changes in the Swift
syntax, the function is now written as</p>
<pre><code>func allEqual<T : Equatable>(array : [T]) -> Bool {
if let firstElem = array.first {
return !array.dropFirst().contains { $0 != firstElem }
}
return true
}
</code></pre>
<p>But you can now also define it as an extension method for arrays:</p>
<pre><code>extension Array where Element : Equatable {
func allEqual() -> Bool {
if let firstElem = first {
return !dropFirst().contains { $0 != firstElem }
}
return true
}
}
print([1, 1, 1].allEqual()) // true
print([1, 2, 1].allEqual()) // false
</code></pre> |
44,873,825 | How to get Timestamp of UTC time with Golang? | <p>I want to convert UTC time string to unix timestamp.
I do this </p>
<pre><code>fmt.Printf("%s %d\n", time.Now().String(), time.Now().Unix())
fmt.Printf("%s %s\n", time.Now().UTC().String(), time.Now().UTC().Unix())
</code></pre>
<p>But I got the same Unix timestamp <code>1499018765</code> for both:</p>
<blockquote>
<p>2017-07-02 20:06:05.5582802 +0200 CEST 1499018765 </p>
<p>2017-07-02 18:06:05.791337 +0000 UTC 1499018765</p>
</blockquote> | 44,873,901 | 1 | 0 | null | 2017-07-02 18:13:40.63 UTC | 4 | 2017-07-02 18:21:51.157 UTC | null | null | null | null | 710,955 | null | 1 | 39 | go|timestamp|unix-timestamp | 72,450 | <p><code>Unix()</code> always returns the number of seconds elapsed since January 1, 1970 UTC. So it does not matter whether you give it <code>time.Now()</code> or <code>time.Now().UTC()</code>, it is the same UTC time, just in different places on Earth. What you get as the result is correct.</p> |
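<p>If the goal is to go from a UTC time string to a Unix timestamp, here is a minimal sketch (the layout and sample value are just placeholders):</p>
<pre><code>t, err := time.Parse(time.RFC3339, "2017-07-02T18:06:05Z")
if err != nil {
    // handle the parse error
}
// Unix() is independent of the location attached to t:
fmt.Println(t.Unix())         // 1499018765
fmt.Println(t.UTC().Unix())   // 1499018765
fmt.Println(t.Local().Unix()) // 1499018765
</code></pre>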
31,777,794 | Changing the margins on Bootstrap 3 for container | <p>I have a site utilizing the bootstrap framework and I want to change the outside margins that are default for Bootstrap's '.container' class.</p>
<p>I want the margins narrower on the outsides, and I want to not have it jump to different sizes based on screen/resolution (For those who use Bootstrap, when the screen gets to a certain size .container class automatically jumps to a different set of margins.)</p>
<p>I just want a consistent margin throughout that I can set.</p> | 31,777,830 | 3 | 0 | null | 2015-08-02 23:59:13.807 UTC | 1 | 2019-09-30 01:46:26.42 UTC | null | null | null | null | 3,550,879 | null | 1 | 7 | css|twitter-bootstrap | 44,883 | <p>You can simply override the CSS. However, you should avoid modifying the Bootstrap files directly, as that limits your ability to update the library. Place your own, custom CSS after Bootstrap, and modify it however you choose.</p>
<p>Further, try using SASS or LESS and creating a variable for your margins/padding. Then you can reuse the variable for various breakpoints or custom containers, and have a single point to edit the margins/padding later.</p>
<p>Another good idea is to modify your containers with a custom class, so that the original styles are preserved. For example:</p>
<pre><code><style type="text/css">
.container.custom-container {
padding: 0 50px;
}
</style>
<div class="container">
Here's a normal container
</div>
<div class="custom-container container">
Here's a custom container
</div>
</code></pre> |
47,776,315 | Error: Statement expected, found py: Dedent | <p>We are willing/forced to develop a small Web App for the university. Now we have started and everything seems to be fine, until the following strange error is raised.</p>
<blockquote>
<p>Statement expected, found py: Dedent</p>
</blockquote>
<p>The error is raised by the following lines of code:</p>
<pre class="lang-py prettyprint-override"><code>def get_reset_token(self, mysql, userid):
try:
conn = mysql.connect()
cursor = conn.cursor()
cursor.execute("""SELECT token FROM tralala_reset_password
WHERE uid=(%s)""", userid)
data = cursor.fetchall()
cursor.close()
conn.close()
return data[0]
except Exception as e:
app.logger(str(e))
return ""
</code></pre>
<p>PyCharm started to mark the <code>return ""</code> statement.</p> | 47,776,698 | 7 | 0 | null | 2017-12-12 15:39:40.553 UTC | 3 | 2022-04-13 10:40:24.587 UTC | 2021-07-29 11:06:49.817 UTC | null | 10,794,031 | null | 4,810,525 | null | 1 | 18 | python|python-3.x|pycharm | 41,006 | <p>Problem solved by ignoring the error. I copied the code to another editor and no error appeared there, so it seems to be a PyCharm mistake.</p>
46,891,622 | Run yarn in a different path | <p>Is there a way to specify a working directory for yarn? This would be different from the --modules-folder option. Specifically, I'm trying to run the yarn install command from a location outside of the package location.</p>
<p>Similar to <code>-C</code> in git</p> | 48,553,490 | 2 | 0 | null | 2017-10-23 14:17:22.247 UTC | 16 | 2020-01-16 10:25:33.627 UTC | null | null | null | null | 914,649 | null | 1 | 127 | yarnpkg | 70,283 | <p><code>--cwd</code> is what you want. </p>
<p>(tested with yarn 1.3.2)</p> |
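<p>For example (the path below is just a placeholder):</p>
<pre><code>yarn --cwd /path/to/project install
yarn --cwd /path/to/project add lodash
</code></pre>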
31,375,914 | SendGrid Emails Getting Rejected as Spam | <p>I'm making a user management system for my app, and I need to send users a "forgot my password" email with a token that lets them reset their account password. I signed up for SendGrid through Azure (to get the 25,000 emails per month free, which sounded like a great deal) and wrote some code to use it, but after testing my program a bit I was dismayed to find that only a couple of my emails actually went through. </p>
<p>After going onto the SG control panel, I found that 4 out of the 6 test emails I sent went through, and all of the others were rejected as being spam. I sent an email to mail-tester.com to see what it thought my spam score was, and it gave me a 4.3/10.</p>
<p>The email in question was a single sentence with a link to the password reset, without any images or other elements. I only sent those 6 emails out, so the volume of my emails definitely wasn't the issue. Still, I'm very puzzled as to why my messages are getting flagged as spam.</p>
<p>Without going to the trouble of making an elaborate authentication setup, are there any basic changes I can make to my system to make it get through to users?</p> | 31,384,977 | 6 | 0 | null | 2015-07-13 05:27:36.147 UTC | 5 | 2022-02-19 18:47:03.44 UTC | null | null | null | null | 1,484,617 | null | 1 | 28 | email|azure|spam|sendgrid|email-spam | 36,087 | <p>In this case it's most likely because you are sending such a short message, with a link to 'reset your password' from a non-whitelabelled email address (the email address you're sending from cannot be verified against the actual domain), and the link may also be a different URL. It's probably getting pulled up as a potential phishing email.</p>
<p>You can rectify this by <a href="https://sendgrid.com/docs/User_Guide/Settings/Whitelabel/index.html" rel="noreferrer">white labeling your domain and email links</a> via the SendGrid dashboard, it's easy to do and should improve your deliverability.</p>
<p>Also check out <a href="https://sendgrid.com/docs/glossary/whitelabel/" rel="noreferrer">this article</a> from the SendGrid support team about White Labeling.</p> |
24,853,632 | How to load a CSV into IPython notebook | <p>I have a csv file ready to load into my python code, however, I want to load it into the following format:</p>
<pre><code>data = [[A,B,C,D],
[A,B,C,D],
[A,B,C,D],
]
</code></pre>
<p>How would I go about loading a .csv file that is readable as a numpy array? e.g., simply using previous tutorials plays havoc with using:</p>
<pre><code>data = np.array(data)
</code></pre>
<p>Failing that, I would just like to upload my csv file (e.g. 'dual-Cored.csv' as data = dual-Cored.csv)</p> | 24,854,457 | 4 | 0 | null | 2014-07-20 18:33:40.153 UTC | 3 | 2016-11-22 14:58:56.893 UTC | 2014-07-20 20:49:17.753 UTC | user3125347 | null | user3125347 | null | null | 1 | 3 | python|csv|matplotlib|ipython | 51,472 | <p>The simplest solution is just:</p>
<pre><code>import numpy as np
data = np.loadtxt("myfile.csv")
</code></pre>
<p>As long as the data is convertible into <code>float</code> and has an equal number of columns on each row, this works.</p>
<p>If the data is not convertible into <code>float</code> in some column, you may write your own converters for it. Please see the <code>numpy.loadtxt</code> documentation. It is really very flexible.</p> |
37,642,869 | RecyclerView GridLayoutManager with full width header | <p>I'm using a very helpful example <a href="http://blog.sqisland.com/2014/12/recyclerview-grid-with-header.html" rel="noreferrer">here</a> to show a RecyclerView and a GridLayoutManager to show a grid with a header.</p>
<p>It looks pretty good, but my graphic designer wants the header item to take up the full width of the RecyclerView. Right now there is padding.</p>
<p>When I set up the GridLayoutManager I add in padding (which I still want for the other grid items): [it's using Xamarin C#]</p>
<pre><code>var numColumns = myListView.MeasuredWidth / tileSizeMax;
myGridView.SetPadding(tilePadding, tilePadding, tilePadding, tilePadding);
layoutManager = new GridLayoutManager(Activity, numColumns);
myListView.SetLayoutManager(layoutManager);
</code></pre>
<p>So, how can I set the padding to be different for the header item...or make it draw itself over the padding?</p> | 38,656,526 | 2 | 0 | null | 2016-06-05 14:11:42.217 UTC | 12 | 2022-04-04 14:29:53.353 UTC | null | null | null | null | 240,795 | null | 1 | 38 | android|android-recyclerview|gridlayoutmanager | 18,242 | <p>Try with this:</p>
<pre><code>mLayoutManager = new GridLayoutManager(this, 2);
mLayoutManager.setSpanSizeLookup(new GridLayoutManager.SpanSizeLookup() {
@Override
public int getSpanSize(int position) {
switch(mAdapter.getItemViewType(position)){
case MyAdapter.HEADER:
return 2;
case MyAdapter.ITEM:
default:
return 1;
}
}
});
</code></pre>
<p>And check these links:</p>
<ul>
<li><a href="http://blog.sqisland.com/2014/12/recyclerview-grid-with-header.html" rel="noreferrer">RecyclerView: Grid with header</a></li>
<li><a href="https://stackoverflow.com/questions/26869312/set-span-for-items-in-gridlayoutmanager-using-spansizelookup">Set span for items in GridLayoutManager using SpanSizeLookup</a></li>
</ul> |
35,392,663 | Defining a Redshift connection in DataGrip | <p>I'm trying to define a Redshift connection in DataGrip but couldn't find any Redshift driver in the UI. I tried using both Postgres and generic Database Driver with no luck.</p>
<p>Has someone been able to configure this?</p> | 45,730,468 | 3 | 0 | null | 2016-02-14 14:03:11.857 UTC | 9 | 2019-11-01 16:24:54.46 UTC | 2019-11-01 16:24:54.46 UTC | null | 3,675,679 | null | 4,420,501 | null | 1 | 22 | jdbc|amazon-redshift|datagrip | 19,736 | <p>DataGrip added the native support for Amazon Redshift. So now it became a lot easier. <a href="https://i.stack.imgur.com/iSp2Y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iSp2Y.png" alt="enter image description here"></a></p> |
28,530,508 | Select query in pymysql | <p>When executing the following:</p>
<pre><code>import pymysql
db = pymysql.connect(host='localhost', port=3306, user='root')
cur = db.cursor()
print(cur.execute("SELECT ParentGuardianID FROM ParentGuardianInformation WHERE UserID ='" + UserID + "'"))
</code></pre>
<p>The output is<code>1</code></p>
<p>How could I alter the code so that the actual value of the ParentGuardianID (which is '001') is printed as opposed to <code>1</code>.</p>
<p>I'm sure the answer is simple but I am a beginner so any help would be much appreciated - thanks!</p> | 28,530,643 | 2 | 0 | null | 2015-02-15 20:01:40.31 UTC | 5 | 2017-09-09 00:19:23.237 UTC | null | null | null | null | 2,856,035 | null | 1 | 16 | python|sql|pymysql | 49,924 | <p><code>cur.execute()</code> just returns the number of rows affected. You should do <code>cur.fetchone()</code> to get the actual result, or <code>cur.fetchall()</code> if you are expecting multiple rows.</p> |
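<p>A minimal sketch of that fetch step (using a parameterized query; the table and column names are the ones from the question, and <code>cur</code> / <code>UserID</code> are assumed to exist as above):</p>
<pre><code>cur.execute(
    "SELECT ParentGuardianID FROM ParentGuardianInformation WHERE UserID = %s",
    (UserID,),
)
row = cur.fetchone()      # a tuple such as ('001',), or None if nothing matched
if row is not None:
    print(row[0])         # prints the actual ParentGuardianID value, e.g. '001'
</code></pre>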
25,449,093 | ggplot2: geom_polygon with no fill | <p>I hope you don't need data for this problem, because I believe I am just making a stupid syntax error. The following code:</p>
<pre><code>ggplot()+
geom_point(data=sites, aes(x=NMDS1, y=NMDS2, shape=group), colour="grey") +
geom_point(data=species, aes(x=NMDS1, y=NMDS2, color=phyla), size=3, shape=20) + scale_colour_manual(values=Pal1) +
geom_segment(data = BiPlotscores, aes(x = 0, xend = NMDS1, y= 0, yend = NMDS2),
arrow = arrow(length = unit(0.25, "cm")), colour = "black") +
geom_text(data = BiPlotscores, aes(x = 1.1*NMDS1, y = 1.1*NMDS2, label = Parameters), size = 3) + coord_fixed()+
theme(panel.background = element_blank()) +
geom_polygon(data = hulls, aes(x=NMDS1, y=NMDS2, colour=phyla, alpha = 0.2))
</code></pre>
<p>leads to the following result:</p>
<p><img src="https://i.stack.imgur.com/wLv53.jpg" alt="enter image description here"></p>
<p>(This is not the final product :)).
I would like to have the polygons unfilled, or just very neatly filled. I do not want them to be greyish, for sure. Fill doesn't do anything, and apparently fiddling with alpha doesn't change anything, either.</p>
<p>Any ideas are superwelcome. Thank you very much!</p>
<p>"Hulls" is coming from the following code (as found here somewhere):</p>
<pre><code>#find hulls
library(plyr)
find_hull <- function(df) df[chull(df$NMDS1, df$NMDS2), ]
hulls <- ddply(species , "phyla", find_hull)
</code></pre> | 25,449,256 | 2 | 0 | null | 2014-08-22 14:07:59.13 UTC | 5 | 2021-04-14 08:27:58.057 UTC | 2014-08-25 07:46:13.643 UTC | null | 1,900,149 | nouse | 3,744,727 | null | 1 | 28 | r|ggplot2 | 50,713 | <p>If you want transparent fill, do <code>fill=NA</code> outside the <code>aes()</code>-specification.</p>
<pre><code>library(ggplot2)
data <- data.frame(y=c(2,2,1), x=c(1,2,1))
ggplot(data) + geom_polygon(aes(x=x, y=y), colour="black", fill=NA)
</code></pre> |
852,233 | Getting values of a generic IDictionary using reflection | <p>I have an instance that implements <code>IDictionary<T, K></code>, I don't know T and K at compiletime, and want to get all elements from it. I don't want to use <code>IEnumerable</code> for some reason, which would be the only non-generic interface implemented by <code>IDictionary</code>.</p>
<p>Code I have so far:</p>
<pre><code>// getting types
Type iDictType = instance.GetType().GetInterface("IDictionary`2");
Type keyType = iDictType.GetGenericArguments()[0];
Type valueType = iDictType.GetGenericArguments()[1];
// getting the keys
IEnumerable keys = (IEnumerable)dictType.GetProperty("Keys")
.GetValue(instance, null);
foreach (object key in keys)
{
// ==> this does not work: calling the [] operator
object value = dictType.GetProperty("Item")
.GetValue(instance, new object[] {key } );
// getting the value from another instance with TryGet
MethodInfo tryGetValue = iDictType.GetMethod("TryGetValue");
object[] arguments = new object[] { key, null };
bool hasElement = (bool)tryGetValue.Invoke(otherInstance, arguments);
object anotherValue = arguments[1];
}
</code></pre>
<p>I could also call TryGetValue, but I think it should be possible to call the [] operator. Can anybody help me?</p> | 852,241 | 2 | 3 | null | 2009-05-12 11:12:21.65 UTC | 10 | 2009-05-13 15:13:15.757 UTC | null | null | null | null | 2,658,202 | null | 1 | 11 | c#|reflection|generics|idictionary | 9,614 | <p>It would be better to <em>figure out</em> the <code>TKey</code> / <code>TValue</code>, and switch into regular code via <code>MakeGenericMethod</code> - like so:</p>
<p>(<strong>edit</strong> - you could pass in the <code>otherInstance</code> as an argument too, if they are of the same type)</p>
<pre><code>static class Program
{
static void Main()
{
object obj = new Dictionary<int, string> {
{ 123, "abc" }, { 456, "def" } };
foreach (Type iType in obj.GetType().GetInterfaces())
{
if (iType.IsGenericType && iType.GetGenericTypeDefinition()
== typeof(IDictionary<,>))
{
typeof(Program).GetMethod("ShowContents")
.MakeGenericMethod(iType.GetGenericArguments())
.Invoke(null, new object[] { obj });
break;
}
}
}
public static void ShowContents<TKey, TValue>(
IDictionary<TKey, TValue> data)
{
foreach (var pair in data)
{
Console.WriteLine(pair.Key + " = " + pair.Value);
}
}
}
</code></pre> |
716,021 | How can I downgrade the version of an SVN working copy? | <p>SVN directories are conveniently easy to move between computers, but this can occasion version mismatches between the working copy and installed svn client resulting in the error </p>
<pre><code>svn: This client is too old to work with working copy '.';
please get a newer Subversion client
</code></pre>
<p>In a perfect world one could install a newer version of the SVN client, but when this is not possible or convenient it would be nice to be able to downgrade the working copy to the version of the client installed--especially when you know that none of the features of the later version are being used.</p>
<p>Checking out a new working copy with the old client only works if the problematic working copy doesn't have any changes, and isn't too big to make that inconvenient.</p>
<p>The scenario to imagine would be something like: Joe sends you a large working copy with lots of nested directories (and associated versioned .svn folders). You work on it. When you try to commit it, svn tells you that your client is too old. Sending it back to Joe shouldn't be necessary. Checking out a new working copy shouldn't be necessary and merging the changes in would be inconvenient in any case.</p>
<p>Is there some way to do this?</p> | 716,030 | 2 | 0 | null | 2009-04-03 22:59:31.99 UTC | 14 | 2012-12-28 17:59:58.133 UTC | 2009-04-03 23:08:30.97 UTC | null | 85,950 | null | 85,950 | null | 1 | 36 | svn | 33,502 | <p>Short answer: it's not trivial.</p>
<p>Fortunately, the developers anticipated this problem and deal with it in an FAQ:
<a href="http://subversion.apache.org/faq.html#working-copy-format-change" rel="noreferrer">http://subversion.apache.org/faq.html#working-copy-format-change</a><br>
The upshot being to download and use their script for the purpose:
<a href="http://svn.apache.org/repos/asf/subversion/trunk/tools/client-side/change-svn-wc-format.py" rel="noreferrer">http://svn.apache.org/repos/asf/subversion/trunk/tools/client-side/change-svn-wc-format.py</a></p>
<p>Note that that script only wants the major version number of the client, so if you have client version 1.4.4 the command would be:</p>
<pre><code>python change-svn-wc-format.py <WC_PATH> 1.4 [...other options...]
</code></pre>
<p><br/>
<strong>Update:</strong></p>
<p>The above script only works for downgrading version 1.6 and below. Downgrading from 1.7+ is apparently not possible. The note from the source:</p>
<pre><code># Downgrading from format 11 (1.7) to format 10 (1.6) is not possible,
# because 11 does not use has-props and cachable-props (but 10 does).
# Naively downgrading in that situation causes properties to disappear
# from the wc.
#
# Downgrading from the 1.7 SQLite-based format to format 10 is not
# implemented.
</code></pre> |
2,595,630 | calendar.getInstance() or calendar.clone() | <p>I need to make a copy of a given date 100s of times (I cannot pass-by-reference). I am wondering which of the below two are better options</p>
<pre><code>newTime=Calendar.getInstance().setTime(originalDate);
</code></pre>
<p>OR</p>
<pre><code>newTime=originalDate.clone();
</code></pre>
<p>Performance is of main concern here.</p>
<p>thx.</p> | 2,595,654 | 5 | 4 | null | 2010-04-07 20:19:19.59 UTC | 3 | 2017-05-05 08:00:50.4 UTC | 2010-04-07 20:22:45.16 UTC | null | 218,589 | null | 127,320 | null | 1 | 37 | java|performance | 51,153 | <p>I would use </p>
<pre><code>newTime= (Calendar) originalDate.clone();
</code></pre> |
2,766,731 | What exactly do "IB" and "UB" mean? | <p>I've seen the terms "IB" and "UB" used several times, particularly in the context of C++. I've tried googling them, but apparently those two-letter combinations see a lot of use. :P</p>
<p>So, I ask you...what do they mean, when they're said as if they're a bad thing?</p> | 2,766,749 | 5 | 1 | null | 2010-05-04 15:37:28.203 UTC | 31 | 2018-11-16 15:45:40.84 UTC | 2012-11-08 21:39:50.323 UTC | null | 319,403 | null | 319,403 | null | 1 | 120 | c++|terminology|definition | 24,982 | <p><strong>IB: Implementation-defined Behaviour.</strong> The standard leaves it up to the particular compiler/platform to define the precise behaviour, but requires that it be defined.</p>
<p>Using implementation-defined behaviour can be useful, but makes your code less portable.</p>
<p><strong>UB: Undefined Behaviour.</strong> The standard does not specify how a program invoking undefined behaviour should behave. Also known as "nasal demons" because theoretically it could make demons fly out of your nose.</p>
<p>Using undefined behaviour is nearly always a bad idea. Even if it seems to work sometimes, any change to environment, compiler or platform can randomly break your code.</p> |
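<p>A small illustrative sketch (not exhaustive) contrasting the two:</p>
<pre><code>#include <climits>
#include <iostream>

int main() {
    // Implementation-defined behaviour: the result is documented by the
    // implementation, but may differ between compilers/platforms.
    std::cout << sizeof(long) << '\n'; // e.g. 4 on some platforms, 8 on others
    std::cout << (-7 >> 1) << '\n';    // right-shifting a negative value is IB (pre-C++20)

    // Undefined behaviour: the standard imposes no requirements at all.
    int i = INT_MAX;
    // ++i;                   // signed overflow is UB
    // int x; std::cout << x; // reading an uninitialized value is UB
}
</code></pre>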
3,074,454 | How do I make the whole area of a list item in my navigation bar, clickable as a link? | <p>I've got a horizontal navigation bar made from an unordered list, and each list item has a lot of padding to make it look nice, but the only area that works as a link is the text itself. How can I enable the user to click anywhere in the list item to active the link?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>#nav {
background-color: #181818;
margin: 0px;
overflow: hidden;
}
#nav img {
float: left;
padding: 5px 10px;
margin-top: auto;
margin-bottom: auto;
vertical-align: bottom;
}
#nav ul {
list-style-type: none;
margin: 0px;
background-color: #181818;
float: left;
}
#nav li {
display: block;
float: left;
padding: 25px 10px;
}
#nav li:hover {
background-color: #785442;
}
#nav a {
color: white;
font-family: Helvetica, Arial, sans-serif;
text-decoration: none;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><div id="nav">
<img src="/images/renderedicon.png" alt="Icon" height="57" width="57" />
<ul>
<li><a href="#">One1</a></li>
<li><a href="#">Two</a></li>
<li><a href="#">Three</a></li>
<li><a href="#">Four</a></li>
</ul>
</div>
<div>
<h2>Heading</h2>
</div></code></pre>
</div>
</div>
</p> | 3,074,459 | 10 | 0 | null | 2010-06-19 05:22:07.437 UTC | 20 | 2020-01-14 11:08:48.027 UTC | 2017-05-23 15:56:13.403 UTC | null | 616,443 | null | 41,742 | null | 1 | 101 | html|css|anchor | 94,237 | <p>Don't put padding in the 'li' item. Instead set the anchor tag to <code>display:inline-block;</code> and apply padding to it.</p> |
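<p>A minimal sketch based on the markup in the question (the exact padding values are just those from your stylesheet):</p>
<pre><code>#nav li {
    display: block;
    float: left;
    /* no padding on the li any more */
}
#nav li a {
    display: inline-block;
    padding: 25px 10px; /* padding moved from the li to the anchor */
    color: white;
}
</code></pre>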
3,121,979 | How to sort a list/tuple of lists/tuples by the element at a given index? | <p>I have some data either in a list of lists or a list of tuples, like this:</p>
<pre><code>data = [[1,2,3], [4,5,6], [7,8,9]]
data = [(1,2,3), (4,5,6), (7,8,9)]
</code></pre>
<p>And I want to sort by the 2nd element in the subset. Meaning, sorting by 2,5,8 where <code>2</code> is from <code>(1,2,3)</code>, <code>5</code> is from <code>(4,5,6)</code>. What is the common way to do this? Should I store tuples or lists in my list?</p> | 3,121,985 | 11 | 1 | null | 2010-06-25 23:01:41.033 UTC | 223 | 2021-12-19 00:26:04.34 UTC | 2020-04-06 13:12:16.327 UTC | null | 4,518,341 | null | 248,430 | null | 1 | 855 | python|list|sorting|tuples | 867,445 | <pre><code>sorted_by_second = sorted(data, key=lambda tup: tup[1])
</code></pre>
<p>or:</p>
<pre><code>data.sort(key=lambda tup: tup[1]) # sorts in place
</code></pre>
<p>The default sort mode is ascending. To sort in descending order use the option <a href="https://docs.python.org/3/howto/sorting.html#ascending-and-descending" rel="noreferrer"><code>reverse=True</code></a>:</p>
<pre><code>sorted_by_second = sorted(data, key=lambda tup: tup[1], reverse=True)
</code></pre>
<p>or:</p>
<pre><code>data.sort(key=lambda tup: tup[1], reverse=True) # sorts in place
</code></pre> |
2,587,305 | android: how to align image in the horizontal center of an imageview? | <p>I've tried all scaletypes, but all of them result in the image to be at the left corner of the imageview.</p>
<pre><code> <ImageView
android:id="@+id/image"
android:scaleType="centerInside"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginRight="6dip"
android:background="#0000"
android:src="@drawable/icon1" />
</code></pre> | 2,587,508 | 12 | 0 | null | 2010-04-06 18:27:14.167 UTC | 22 | 2021-12-19 12:58:13.973 UTC | null | null | null | null | 253,800 | null | 1 | 60 | android|alignment | 238,429 | <p>Your ImageView has the attribute <code>wrap_content</code>. I would think that the image is centered inside the imageview but the imageview itself is not centered in the parent view. If you have only the imageview on the screen try <code>match_parent</code> instead of <code>wrap_content</code>. If you have more than one view in the layout you have to center the imageview.</p>
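<p>A rough sketch of the second option (centering the ImageView inside its parent; the vertical LinearLayout here is just an example parent):</p>
<pre><code><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <ImageView
        android:id="@+id/image"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center_horizontal"
        android:src="@drawable/icon1" />

</LinearLayout>
</code></pre>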
33,681,517 | Tensorflow One Hot Encoder? | <p>Does tensorflow have something similar to scikit learn's <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html">one hot encoder</a> for processing categorical data? Would using a placeholder of tf.string behave as categorical data?</p>
<p>I realize I can manually pre-process the data before sending it to tensorflow, but having it built in is very convenient.</p> | 33,682,213 | 15 | 1 | null | 2015-11-12 21:16:01.233 UTC | 18 | 2020-04-01 17:34:40.43 UTC | 2015-11-13 18:04:33.343 UTC | null | 610,569 | null | 276,310 | null | 1 | 72 | python|machine-learning|neural-network|tensorflow | 84,383 | <p>As of TensorFlow 0.8, there is now a <a href="https://www.tensorflow.org/api_docs/python/tf/one_hot" rel="nofollow noreferrer">native one-hot op, <code>tf.one_hot</code></a> that can convert a set of sparse labels to a dense one-hot representation. This is in addition to <a href="https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits" rel="nofollow noreferrer"><code>tf.nn.sparse_softmax_cross_entropy_with_logits</code></a>, which can in some cases let you compute the cross entropy directly on the sparse labels instead of converting them to one-hot.</p>
<p><strong>Previous answer, in case you want to do it the old way:</strong>
@Salvador's answer is correct - there (used to be) no native op to do it. Instead of doing it in numpy, though, you can do it natively in tensorflow using the sparse-to-dense operators:</p>
<pre><code>num_labels = 10
# label_batch is a tensor of numeric labels to process
# 0 <= label < num_labels
sparse_labels = tf.reshape(label_batch, [-1, 1])
derived_size = tf.shape(label_batch)[0]
indices = tf.reshape(tf.range(0, derived_size, 1), [-1, 1])
concated = tf.concat(1, [indices, sparse_labels])
outshape = tf.pack([derived_size, num_labels])
labels = tf.sparse_to_dense(concated, outshape, 1.0, 0.0)
</code></pre>
<p>The output, labels, is a one-hot matrix of batch_size x num_labels.</p>
<p>Note also that as of 2016-02-12 (which I assume will eventually be part of a 0.7 release), TensorFlow also has the <code>tf.nn.sparse_softmax_cross_entropy_with_logits</code> op, which in some cases can let you do training without needing to convert to a one-hot encoding.</p>
<p>Edited to add: At the end, you may need to explicitly set the shape of labels. The shape inference doesn't recognize the size of the num_labels component. If you don't need a dynamic batch size with derived_size, this can be simplified.</p>
<p>Edited 2016-02-12 to change the assignment of outshape per comment below.</p> |
10,305,643 | How do I revert a file to 'last checkin' state in Mercurial? | <p>I have a hypothetical Mercurial repository on disc. I'm most of the way through creating a new feature, when I realise I've made a complete mess of the file I'm working on, and want to revert that file back to its last commit state. </p>
<p>I can use <code>hg update</code> to refresh the working copy from the repository, but that updates every file. </p>
<p>Is there a mercurial command that can update just a single file?</p> | 10,305,711 | 3 | 0 | null | 2012-04-24 20:37:55.313 UTC | 1 | 2012-04-26 20:08:38.907 UTC | 2012-04-24 21:13:03.333 UTC | null | 19,465 | null | 301,032 | null | 1 | 32 | mercurial|undo | 28,501 | <p>There is a mercurial command to revert files: <code>hg revert</code>. This should revert any changes. You can also pass a file name to it, e.g. <code>hg revert fileName</code>.</p>
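<p>For example (the file name is just a placeholder; note that <code>hg revert</code> saves the modified file as <code>filename.orig</code> unless you pass <code>--no-backup</code>):</p>
<pre><code>hg revert myfile.txt             # revert one file to its last committed state
hg revert --no-backup myfile.txt # same, without leaving a .orig backup behind
</code></pre>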
33,157,982 | How do I disable "TODO" warnings in pylint? | <p>When running pylint on a python file it shows me warnings regarding TODO comments by default. E.g.:</p>
<blockquote>
<p>************* Module foo<br/>
W:200, 0: TODO(SE): fix this! (fixme)<br/>
W:294, 0: TODO(SE): backlog item (fixme)<br/>
W:412, 0: TODO(SE): Delete bucket? (fixme)</p>
</blockquote>
<p>While I do find this behavior useful, I would like to know of a way of temporarily and/or permanently turning these specific warnings on or off.</p>
<p>I am able to generate a pylint config file:
<code>pylint --generate-rcfile > ~/.pylintrc</code></p>
<p>I'm just not sure what to put in this file to disable warnings for TODO comments.</p>
<pre><code> [MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO
</code></pre>
<p>simply drop TODO from the "notes" list.</p>
<p>The config file is found at </p>
<pre><code>~/.pylintrc
</code></pre>
<p>If you have not generated the config file, this can be done with </p>
<pre><code>pylint --generate-rcfile > ~/.pylintrc
</code></pre> |
33,263,491 | How do I connect to my existing Git repository using Visual Studio Code? | <p>I've been using Visual Studio code for a long time, since v0.9.1. I now have run into the need to use GitHub and an online Git repository.</p>
<p>I have the online Git repository set up and have been pushing changes to the online repository using GitHub. I have recently come to realize I can save myself a step by using Visual Studio Code to do both: edit my code, then send it up to the online repository.</p>
<p>I am very new to the whole Git concept. Visual Studio Code had me install the "Git" plugin which installed Git Bash, Git CMD, and Git GUI.</p>
<p>This is the online repository URL I'm trying to get to: <a href="https://github.com/SpectrumGraphics/Spectrum-Graphic-Designs.git" rel="noreferrer">https://github.com/SpectrumGraphics/Spectrum-Graphic-Designs.git</a></p>
<p><a href="https://i.stack.imgur.com/qS9ci.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qS9ci.png" alt="Visual Studio Code Git"></a><a href="https://i.stack.imgur.com/7ZNW3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7ZNW3.png" alt="Plugin Visual Studio had my install"></a></p> | 40,179,164 | 5 | 0 | null | 2015-10-21 15:36:11.86 UTC | 18 | 2021-02-24 03:05:04.64 UTC | 2020-05-19 19:05:30.127 UTC | null | 63,550 | null | 5,394,509 | null | 1 | 63 | git|github|visual-studio-code | 247,806 | <ol>
<li>Open Visual Studio Code <em>terminal</em> (<kbd>Ctrl</kbd> + <kbd>`</kbd>)</li>
<li><p>Write the Git clone command. For example,</p>
<pre><code>git clone https://github.com/angular/angular-phonecat.git
</code></pre></li>
<li><p>Open the folder you have just cloned (menu <em>File</em> → <em>Open Folder</em>)</p>
<p><a href="https://i.stack.imgur.com/momog.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/momog.jpg" alt="Enter image description here"></a></p></li>
</ol> |
23,056,177 | Continuous deployment & AWS autoscaling using Ansible (+Docker ?) | <p>My organization's website is a Django app running on front end webservers + a few background processing servers in AWS.</p>
<p>We're currently using Ansible for both :</p>
<ul>
<li>system configuration (from a bare OS image)</li>
<li>frequent manually-triggered code deployments.</li>
</ul>
<p>The same Ansible playbook is able to provision either a local Vagrant dev VM, or a production EC2 instance from scratch.</p>
<p>We now want to implement autoscaling in EC2, and that requires some changes towards a <a href="http://www.lauradhamilton.com/servers-pets-versus-cattle" rel="noreferrer">"treat servers as cattle, not pets"</a> philosophy.</p>
<p>The first prerequisite was to move from a statically managed Ansible inventory to a dynamic, EC2 API-based one, done.</p>
<p>The next big question is how to deploy in this new world where throwaway instances come up & down in the middle of the night. The options I can think of are :</p>
<ol>
<li><strong>Bake a new fully-deployed AMI for each deploy</strong>, create a new AS Launch config and update the AS group with that. Sounds very, very cumbersome, but also very reliable because of the clean slate approach, and will ensure that any system changes the code requires will be here. Also, no additional steps needed on instance bootup, so up & running more quickly. </li>
<li><strong>Use a base AMI</strong> that doesn't change very often, automatically get the latest app code from git upon bootup, start webserver. Once it's up just do manual deploys as needed, like before. But what if the new code depends on a change in the system config (new package, permissions, etc) ? Looks like you have to start taking care of dependencies between code versions and system/AMI versions, whereas the "just do a full ansible run" approach was more integrated and more reliable. Is it more than just a potential headache in practice ?</li>
<li><strong>Use Docker ?</strong> I have a strong hunch it can be useful, but I'm not sure yet how it would fit our picture. We're a relatively self-contained Django front-end app with just RabbitMQ + memcache as services, which we're never going to run on the same host anyway. So what benefits are there in building a Docker image using Ansible that contains system packages + latest code, rather than having Ansible just do it directly on an EC2 instance ? </li>
</ol>
<p>How do you do it ? Any insights / best practices ?
Thanks !</p> | 23,117,115 | 3 | 0 | null | 2014-04-14 09:10:38.59 UTC | 9 | 2016-08-17 09:03:08.613 UTC | null | null | null | null | 3,530,909 | null | 1 | 18 | amazon-ec2|docker|ansible|continuous-deployment|autoscaling | 5,808 | <p>This question is very opinion based. But just to give you my take, I would just go with prebaking the AMIs with Ansible and then use CloudFormation to deploy your stacks with Autoscaling, Monitoring and your pre-baked AMIs. The advantage of this is that if you have most of the application stack pre-baked into the AMI autoscaling <code>UP</code> will happen faster.</p>
<p>Docker is another approach but in my opinion it adds an extra layer in your application that you may not need if you are already using EC2. Docker can be really useful if you say want to containerize in a single server. Maybe you have some extra capacity in a server and Docker will allow you to run that extra application on the same server without interfering with existing ones.</p>
<p>Having said that some people find Docker useful not in the sort of way to optimize the resources in a single server but rather in a sort of way that it allows you to pre-bake your applications in containers. So when you do deploy a new version or new code all you have to do is copy/replicate these docker containers across your servers, then stop the old container versions and start the new container versions.</p>
<p>My two cents.</p> |
35,654,495 | Does JavaScript have a Map literal notation? | <p>As of ES6, JavaScript has a proper Map object. I don't see a way to use a literal notation though, as you could with an Array or an Object. Am I missing it, or does it not exist?</p>
<p>Array: <code>var arr = ["Foo", "Bar"];</code></p>
<p>Object: <code>var obj = { foo: "Foo", bar: "Bar" };</code></p>
<p>Map: ???</p> | 35,654,571 | 3 | 1 | null | 2016-02-26 14:37:14.537 UTC | 5 | 2017-12-22 07:52:38.4 UTC | null | null | null | null | 2,708,274 | null | 1 | 72 | javascript|ecmascript-6 | 24,090 | <p>No, ES6 does not have a literal notation for <code>Map</code>s or <code>Set</code>s.</p>
<p>You will have to use their constructors, passing an iterable (typically an array literal):</p>
<pre><code>var map = new Map([["foo", "Foo"], ["bar", "Bar"], β¦]);
var set = new Set(["Foo", "Bar", β¦]);
</code></pre>
<p>There are some proposals to add new literal syntax to the language, but none made it into ES6 (and I'm personally not confident they will make it into any future version).</p> |
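<p>If you want something closer to an object-literal feel and your keys are strings, one workaround (in an ES2017+ environment) is to build the Map from an object literal via <code>Object.entries</code>:</p>
<pre><code>var map = new Map(Object.entries({ foo: "Foo", bar: "Bar" }));
map.get("foo"); // "Foo"
</code></pre>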
25,550,624 | Group by multiple values Underscore.JS but keep the keys and values | <p>I'm trying to group the following array with objects:</p>
<pre><code>[ { user_id: 301, alert_id: 199, deal_id: 32243 },
{ user_id: 301, alert_id: 200, deal_id: 32243 },
{ user_id: 301, alert_id: 200, deal_id: 107293 },
{ user_id: 301, alert_id: 200, deal_id: 277470 } ]
</code></pre>
<p>As you can see it contains user_id and alert_id combinations, which I like to group. So I would like to have the following array:</p>
<pre><code>[ { user_id: 301, alert_id: 199, deals: [32243] },
{ user_id: 301, alert_id: 200, deals: [32243,107293,277470]}]
</code></pre>
<p>Anyone knows a solution for this? With underscore's GroupBy I can group the values based on one key. But I need to group them, based on the combination user_id AND alert_id, as you can see.</p>
<p>I took a look at <a href="https://github.com/iros/underscore.nest">underscore.nest</a>, but the problem is it creates its own keys.</p> | 25,551,041 | 3 | 0 | null | 2014-08-28 13:55:42.817 UTC | 7 | 2019-10-18 23:24:09.287 UTC | null | null | null | null | 1,843,511 | null | 1 | 36 | javascript|underscore.js | 36,544 | <p>Use <a href="http://underscorejs.org/#groupBy"><strong>groupBy</strong></a> with a function that creates a composite key using user_id and alert_id. Then map across the groupings to get what you want:</p>
<pre><code> var list = [ { user_id: 301, alert_id: 199, deal_id: 32243 },
{ user_id: 301, alert_id: 200, deal_id: 32243 },
{ user_id: 301, alert_id: 200, deal_id: 107293 },
{ user_id: 301, alert_id: 200, deal_id: 277470 } ];
var groups = _.groupBy(list, function(value){
return value.user_id + '#' + value.alert_id;
});
var data = _.map(groups, function(group){
return {
user_id: group[0].user_id,
alert_id: group[0].alert_id,
deals: _.pluck(group, 'deal_id')
}
});
</code></pre> |
25,483,308 | Deleting all rows from Cassandra cql table | <p>Is there a command to delete all the rows present in a CQL table in Cassandra, like the one in SQL?</p>
<pre><code>delete from TABLE
</code></pre>
<p>Going by the documentation, I don't find any way to perform delete operation without a where condition.</p>
<pre><code>DELETE col1 FROM SomeTable WHERE userID = 'some_key_value';
</code></pre> | 25,487,398 | 1 | 0 | null | 2014-08-25 09:55:10.01 UTC | 16 | 2016-10-26 16:03:20.677 UTC | null | null | null | null | 2,519,577 | null | 1 | 99 | cassandra|cql | 93,664 | <p>To remove all rows from a CQL Table, you can use the <a href="http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/truncate_r.html">TRUNCATE</a> command:</p>
<pre><code>TRUNCATE keyspace_name.table_name;
</code></pre>
<p>Or if you are already using the keyspace that contains your target table:</p>
<pre><code>TRUNCATE table_name;
</code></pre>
<p>Important to note: by default Cassandra creates a snapshot of the table just prior to TRUNCATE. Be sure to clean up old snapshots, or set <code>auto_snapshot: false</code> in your cassandra.yaml.</p>
43,240,970 | How to mock http.Client Do method | <p>I'm trying to find a solution to write tests and mock HTTP responses.
In my function I accept an interface:</p>
<pre><code>type HttpClient interface {
Do(req *http.Request) (*http.Response, error)
}
</code></pre>
<p>I make an HTTP GET request with basic auth:</p>
<pre><code>func GetOverview(client HttpClient, overview *Overview) (*Overview, error) {
request, err := http.NewRequest("GET", fmt.Sprintf("%s:%s/api/overview", overview.Config.Url, overview.Config.Port), nil)
if (err != nil) {
log.Println(err)
}
request.SetBasicAuth(overview.Config.User, overview.Config.Password)
resp, err := client.Do(request)
</code></pre>
<p>How can I mock this HttpClient?
I'm looking for a mocking library, for instance: <a href="https://github.com/h2non/gock" rel="noreferrer">https://github.com/h2non/gock</a>,
but it only provides mocks for Get and Post</p>
<p>Maybe I should do it in a different way.
I'll be grateful for advice</p> | 43,241,303 | 4 | 0 | null | 2017-04-05 20:24:40.403 UTC | 9 | 2019-07-22 04:49:38.003 UTC | null | null | null | null | 2,924,810 | null | 1 | 57 | unit-testing|testing|go|mocking | 65,990 | <p>Any struct with a method matching the signature you have in your interface will implement the interface. For example, you could create a struct <code>ClientMock</code></p>
<pre><code>type ClientMock struct {
}
</code></pre>
<p>with the method</p>
<pre><code>func (c *ClientMock) Do(req *http.Request) (*http.Response, error) {
return &http.Response{}, nil
}
</code></pre>
<p>You could then inject this <code>ClientMock</code> struct into your <code>GetOverview</code> func. <a href="https://play.golang.org/p/fWwW1ZMRvi" rel="noreferrer">Here</a>'s an example in the Go Playground.</p> |
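<p>A slightly more flexible variant of the same idea (assuming the usual <code>net/http</code>, <code>io/ioutil</code> and <code>bytes</code> imports) stores a function on the mock, so each test can supply its own canned response and never hits the network:</p>
<pre><code>type ClientMock struct {
    DoFunc func(req *http.Request) (*http.Response, error)
}

func (c *ClientMock) Do(req *http.Request) (*http.Response, error) {
    return c.DoFunc(req)
}

// in a test:
client := &ClientMock{
    DoFunc: func(req *http.Request) (*http.Response, error) {
        return &http.Response{
            StatusCode: http.StatusOK,
            Body:       ioutil.NopCloser(bytes.NewBufferString(`{"ok": true}`)), // any canned body your code expects
        }, nil
    },
}
// client satisfies HttpClient and can be passed to GetOverview.
</code></pre>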
8,779,074 | Having trouble calculating accurate total walking/running distance using CLLocationManager | <p>I'm trying to build an iOS app that displays the total distance travelled when running or walking. I've read and re-read all the documentation I can find, but I'm having trouble coming up with something that gives me an accurate total distance.</p>
<p>When compared with Nike+ GPS or RunKeeper, my app consistently reports a shorter distance. They'll report the same at first, but as I keep moving, the values of my app vs other running apps gradually drift.</p>
<p>For example, if I walk .3 kilometers (verified by my car's odometer), Nike+ GPS and RunKeeper both report ~.3 kilometers every time, but my app will report ~.13 kilometers. newLocation.horizontalAccuracy is consistently 5.0 or 10.0.</p>
<p>Here's the code I'm using. Am I missing something obvious? Any thoughts on how I could improve this to get a more accurate reading?</p>
<pre><code>#define kDistanceCalculationInterval 10 // the interval (seconds) at which we calculate the user's distance
#define kNumLocationHistoriesToKeep 5 // the number of locations to store in history so that we can look back at them and determine which is most accurate
#define kValidLocationHistoryDeltaInterval 3 // the maximum valid age in seconds of a location stored in the location history
#define kMinLocationsNeededToUpdateDistance 3 // the number of locations needed in history before we will even update the current distance
#define kRequiredHorizontalAccuracy 40.0f // the required accuracy in meters for a location. anything above this number will be discarded
- (id)init {
if ((self = [super init])) {
if ([CLLocationManager locationServicesEnabled]) {
self.locationManager = [[CLLocationManager alloc] init];
self.locationManager.delegate = self;
self.locationManager.desiredAccuracy = kCLLocationAccuracyBestForNavigation;
self.locationManager.distanceFilter = 5; // specified in meters
}
self.locationHistory = [NSMutableArray arrayWithCapacity:kNumLocationHistoriesToKeep];
}
return self;
}
- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
// since the oldLocation might be from some previous use of core location, we need to make sure we're getting data from this run
if (oldLocation == nil) return;
BOOL isStaleLocation = [oldLocation.timestamp compare:self.startTimestamp] == NSOrderedAscending;
[self.delegate locationManagerDebugText:[NSString stringWithFormat:@"accuracy: %.2f", newLocation.horizontalAccuracy]];
if (!isStaleLocation && newLocation.horizontalAccuracy >= 0.0f && newLocation.horizontalAccuracy < kRequiredHorizontalAccuracy) {
[self.locationHistory addObject:newLocation];
if ([self.locationHistory count] > kNumLocationHistoriesToKeep) {
[self.locationHistory removeObjectAtIndex:0];
}
BOOL canUpdateDistance = NO;
if ([self.locationHistory count] >= kMinLocationsNeededToUpdateDistance) {
canUpdateDistance = YES;
}
if ([NSDate timeIntervalSinceReferenceDate] - self.lastDistanceCalculation > kDistanceCalculationInterval) {
self.lastDistanceCalculation = [NSDate timeIntervalSinceReferenceDate];
CLLocation *lastLocation = (self.lastRecordedLocation != nil) ? self.lastRecordedLocation : oldLocation;
CLLocation *bestLocation = nil;
CGFloat bestAccuracy = kRequiredHorizontalAccuracy;
for (CLLocation *location in self.locationHistory) {
if ([NSDate timeIntervalSinceReferenceDate] - [location.timestamp timeIntervalSinceReferenceDate] <= kValidLocationHistoryDeltaInterval) {
if (location.horizontalAccuracy < bestAccuracy && location != lastLocation) {
bestAccuracy = location.horizontalAccuracy;
bestLocation = location;
}
}
}
if (bestLocation == nil) bestLocation = newLocation;
CLLocationDistance distance = [bestLocation distanceFromLocation:lastLocation];
if (canUpdateDistance) self.totalDistance += distance;
self.lastRecordedLocation = bestLocation;
}
}
}
</code></pre> | 8,789,941 | 2 | 0 | null | 2012-01-08 16:15:19.99 UTC | 14 | 2014-07-09 20:50:37.95 UTC | 2012-01-10 02:22:26.51 UTC | null | 545,347 | null | 545,347 | null | 1 | 16 | objective-c|ios|core-location|cllocationmanager|cllocation | 7,442 | <p>As it turns out, the code I posted above works great. The problem happened to be in a different part of my app. I was accidentally converting the distance from meters to miles, instead of from meters to kilometers. Oops!</p>
<p>Anyway, hopefully my post will still have some merit, since I feel it's a pretty solid example of how to track a user's distance with Core Location.</p> |
8,901,574 | How to refresh a page in a backbone application | <p>I am using backbone to build my web app.</p>
<p>Currently I am facing an issue whereby if I am on the home page, I am unable to refresh the same page by just clicking on the 'home' button again.</p>
<p>I believe that this is the limitation provided by backbone (does not reload the page if the same URL is called)</p>
<p><img src="https://i.stack.imgur.com/jbmuC.png" alt="enter image description here"></p>
<p>Is there any way around this? So that I can trigger a page reload when I click on the home button again i.e. call the same URL again?</p> | 8,991,969 | 13 | 0 | null | 2012-01-17 20:51:21.893 UTC | 13 | 2014-11-27 01:55:20.837 UTC | null | null | null | null | 683,898 | null | 1 | 29 | javascript|backbone.js|page-refresh | 41,580 | <p>Looking at the backbone.js source, it seems as though this is not possible by default, since it only responds to a url change. And since clicking the same link does not change the URL, it will not trigger a change event.</p>
<p>Code in Backbone.History:</p>
<pre><code>$(window).bind('hashchange', this.checkUrl);
</code></pre>
<p>You'll need to handle a non-change yourself. Here's what I did:</p>
<pre><code>$('.nav-links').click(function(e) {
var newFragment = Backbone.history.getFragment($(this).attr('href'));
if (Backbone.history.fragment == newFragment) {
// need to null out Backbone.history.fragement because
// navigate method will ignore when it is the same as newFragment
Backbone.history.fragment = null;
Backbone.history.navigate(newFragment, true);
}
});
</code></pre> |
57,296,097 | How does backpressure work in Project Reactor? | <p>I've been working in Spring Reactor and had some previous testing that made me wonder how Fluxes handle backpressure by default. I know that onBackpressureBuffer and such exist, and I have also read that <a href="https://medium.com/@tpolansk/the-next-step-for-reactive-android-programming-34f83cb1ea46" rel="noreferrer">RxJava defaults to unbounded until you define whether to buffer, drop, etc.</a></p>
<p>So, can anyone clarify for me: What is the default backpressure behavior for a Flux in Reactor 3?</p>
<p>I tried searching for the answer but didn't find any clear answers, only definitions of Backpressure or that answer linked above for RxJava</p> | 57,298,393 | 1 | 0 | null | 2019-07-31 17:24:45.08 UTC | 15 | 2021-04-30 20:46:32.43 UTC | 2020-09-16 17:27:51.827 UTC | null | 6,051,176 | null | 11,649,193 | null | 1 | 21 | spring|rx-java|rx-java2|project-reactor|reactor | 7,422 | <h1><strong>What is back-pressure?</strong></h1>
<blockquote>
<p>Backpressure or the ability for the consumer to signal the producer
that the rate of emission is too high - <em>Reactor Reference</em></p>
</blockquote>
<p>When we are talking about backpressure we have to separate sources/publishers into two groups: the ones that respect the demand from the subscriber, and those that ignore it.</p>
<p><strong>Generally hot sources do not respect subscriber demand</strong>, since they often produce live data, like listening into a Twitter feed. In this example the subscriber doesn't have control over at what rate tweets are created, so it could easily get overwhelmed.</p>
<p>On the other hand <strong>a cold source usually generates data on demand</strong> when subscription happens, like making an HTTP request and then processing the response. In this case the HTTP server you are calling will only send a response after you sent your request.</p>
<p>Important to note that this is not a rule: not every hot source ignores the demand and not every cold source respects it. You can read more on hot and cold sources <a href="https://projectreactor.io/docs/core/release/reference/#reactor.hotCold" rel="noreferrer">here</a>.</p>
<p><strong>Let's look at some examples that might help in understanding.</strong></p>
<h1>Publisher that respects the demand</h1>
<p>Given a Flux that produces numbers from 1 to <code>Integer.MAX_VALUE</code> and given a processing step that takes 100ms to process a single element:</p>
<pre class="lang-java prettyprint-override"><code>Flux.range(1, Integer.MAX_VALUE)
.log()
.concatMap(x -> Mono.delay(Duration.ofMillis(100)), 1) // simulate that processing takes time
.blockLast();
</code></pre>
<p>Let's see the logs:</p>
<pre><code>[ INFO] (main) | onSubscribe([Synchronous Fuseable] FluxRange.RangeSubscription)
[ INFO] (main) | request(1)
[ INFO] (main) | onNext(1)
[ INFO] (main) | request(1)
[ INFO] (main) | onNext(2)
[ INFO] (parallel-1) | request(1)
[ INFO] (parallel-1) | onNext(3)
[ INFO] (parallel-2) | request(1)
[ INFO] (parallel-2) | onNext(4)
[ INFO] (parallel-3) | request(1)
[ INFO] (parallel-3) | onNext(5)
</code></pre>
<p>We can see that before every onNext there is a request. The request signal is sent by <code>concatMap</code> operator. It is signaled when <code>concatMap</code> finished the current element and ready to accept the next one. The source only sends the next item when it receives a request from the downstream.</p>
<p>In this example backpressure is automatic, we don't need to define any strategy because the operator knows what it can handle and the source respects it.</p>
<h1>Publisher that ignores the demand and no backpressure strategy is defined</h1>
<p>For the sake of simplicity I selected an easy to understand cold publisher for this example. It's <a href="https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#interval-java.time.Duration-" rel="noreferrer">Flux.interval</a> which emits one item per the specified time interval. It makes sense that this cold publisher does not respect demand since it would be quite strange to see items emitted by different, longer intervals than the one originally specified.</p>
<p>Let's see the code:</p>
<pre class="lang-java prettyprint-override"><code>Flux.interval(Duration.ofMillis(1))
.log()
.concatMap(x -> Mono.delay(Duration.ofMillis(100)))
.blockLast();
</code></pre>
<p>Source emits one item every millisecond. Subscriber is able to process one item every 100 milliseconds. It's clear that the subscriber is not able to keep up with the producer and we get an exception something like this quite soon:</p>
<pre><code>reactor.core.Exceptions$OverflowException: Could not emit tick 32 due to lack of requests (interval doesn't support small downstream requests that replenish slower than the ticks)
at reactor.core.Exceptions.failWithOverflow(Exceptions.java:215)
...
</code></pre>
<p>What can we do to avoid this exception?</p>
<h1>Publisher that ignores the demand and backpressure strategy is defined</h1>
<p>The default backpressure strategy is the one we have seen above: terminating with error. Reactor does not enforce any error handling strategy on us. When we see this kind of error we can decide which one is the most applicable for our use case.</p>
<p>You can find a couple of them in <a href="https://projectreactor.io/docs/core/release/reference/#which.errors" rel="noreferrer">Reactor reference</a>.</p>
<p>For this example we will use the simplest one: <code>onBackpressureDrop</code>.</p>
<pre class="lang-java prettyprint-override"><code>Flux.interval(Duration.ofMillis(1))
.onBackpressureDrop()
.concatMap(a -> Mono.delay(Duration.ofMillis(100)).thenReturn(a))
.doOnNext(a -> System.out.println("Element kept by consumer: " + a))
.blockLast();
</code></pre>
<p>Output:</p>
<pre><code>Element kept by consumer: 0
Element kept by consumer: 1
Element kept by consumer: 2
Element kept by consumer: 3
Element kept by consumer: 4
Element kept by consumer: 5
Element kept by consumer: 6
Element kept by consumer: 7
Element kept by consumer: 8
Element kept by consumer: 9
Element kept by consumer: 10
Element kept by consumer: 11
Element kept by consumer: 12
Element kept by consumer: 13
Element kept by consumer: 14
Element kept by consumer: 15
Element kept by consumer: 16
Element kept by consumer: 17
Element kept by consumer: 18
Element kept by consumer: 19
Element kept by consumer: 20
Element kept by consumer: 21
Element kept by consumer: 22
Element kept by consumer: 23
Element kept by consumer: 24
Element kept by consumer: 25
Element kept by consumer: 26
Element kept by consumer: 27
Element kept by consumer: 28
Element kept by consumer: 29
Element kept by consumer: 30
Element kept by consumer: 31
Element kept by consumer: 2399
Element kept by consumer: 2400
Element kept by consumer: 2401
Element kept by consumer: 2402
Element kept by consumer: 2403
Element kept by consumer: 2404
Element kept by consumer: 2405
Element kept by consumer: 2406
Element kept by consumer: 2407
</code></pre>
<p>We can see that after the first 32 items there is a quite big skip to 2400. The elements between are dropped due to the defined strategy.</p>
<h1>Key takeaways</h1>
<ul>
<li>Back pressure is often automatic and we don't need to do anything since we get data on demand.</li>
<li>In case of sources which do not respect subscriber demand we need to define a strategy to avoid terminating error.</li>
</ul>
<p><strong>UPDATE:</strong>
Useful read: <a href="https://projectreactor.io/docs/core/release/reference/#_on_backpressure_and_ways_to_reshape_requests" rel="noreferrer">How to control request rate</a></p> |
48,387,180 | Is it possible to capitalize first letter of text/string in react native? How to do it? | <p>I have to capitalize the first letter of the text that I want to display. I searched for it but couldn't find a clear way to do that; also, there is no such prop for <code>text</code> in the official React Native documentation.</p>
<p>I am showing my text with following format:</p>
<pre><code><Text style={styles.title}>{item.item.title}</Text>
</code></pre>
<p>or</p>
<pre><code><Text style={styles.title}>{this.state.title}</Text>
</code></pre>
<p>How can I do it?</p>
<p>Suggestions are welcome? </p> | 48,388,363 | 11 | 0 | null | 2018-01-22 17:32:52.447 UTC | 12 | 2022-05-28 05:15:32.413 UTC | 2018-01-22 17:40:38.63 UTC | null | 7,248,114 | null | 7,248,114 | null | 1 | 60 | text|react-native|react-native-text | 101,289 | <p>Write a function like this</p>
<pre><code>Capitalize(str){
return str.charAt(0).toUpperCase() + str.slice(1);
}
</code></pre>
<p>then call it from <code><Text></code> tag By passing text as parameter</p>
<pre><code><Text>{this.Capitalize(this.state.title)} </Text>
</code></pre> |
60,245,159 | How can I build a Swift Package for iOS over command line? | <p>In Xcode, I can select my destination as a "generic iOS device" or any iOS simulator, and my package will build platform-specific code for ios.</p>
<p>Via command line "swift build" <strong>only</strong> builds my target for macOS.</p>
<p>I want to build the target for iOS for CI purposes. The problem with building for macOS is that UIKit-specific code won't be built.</p>
<p>For example:</p>
<pre><code>#if canImport(UIKit)
// some invalid code
#endif
</code></pre>
<p>The invalid code will not be noticed and will pass the build phase.</p>
<p>Ideally, I could say something like <code>swift build -platform iOS</code>. Is there a way to do something like this?</p> | 60,246,359 | 2 | 0 | null | 2020-02-16 03:31:07.2 UTC | 9 | 2021-10-24 17:38:09.063 UTC | null | null | null | null | 4,984,725 | null | 1 | 18 | swift|xcode|uikit|xcodebuild|swift-package-manager | 5,605 | <p>At time of writing (Feb 16, 2019), a working solution is:</p>
<pre><code>swift build -v \
-Xswiftc "-sdk" \
-Xswiftc "`xcrun --sdk iphonesimulator --show-sdk-path`" \
-Xswiftc "-target" \
-Xswiftc "x86_64-apple-ios13.0-simulator"
</code></pre>
<p>This command uses <code>-Xswiftc</code> to workaround the issue by overriding the sdk from macOS to iphonesimulator.</p>
<blockquote>
<p>Strictly we add these flags so developers can work around issues, but they also should report a bug so that we can provide a proper solution for their needs.</p>
</blockquote>
<p><a href="https://github.com/apple/swift-package-manager/commit/d04ca889fc0ddafbec2c63511a3cd9570a43f5f7" rel="noreferrer">Source</a></p>
<p>So I'm guessing there will be a more elegant solution in the future.</p> |
53,828,891 | dyld: Library not loaded: /usr/local/opt/icu4c/lib/libicui18n.62.dylib error running php after installing node with brew on Mac | <p>I installed node using homebrew (Mojave), afterwards php stoped working and if I try to run <code>php -v</code> I get this error:</p>
<pre><code>php -v
dyld: Library not loaded: /usr/local/opt/icu4c/lib/libicui18n.62.dylib
Referenced from: /usr/local/bin/php
Reason: image not found
</code></pre>
<p>I tried to uninstall both node and icu4c but the problem persists</p> | 54,873,233 | 36 | 7 | null | 2018-12-18 08:20:21.62 UTC | 81 | 2022-09-13 17:01:17.35 UTC | 2019-07-14 12:33:37.14 UTC | null | 1,245,190 | null | 2,882,899 | null | 1 | 546 | php|node.js|macos|homebrew | 291,361 | <blockquote>
<p>Update - As stated in some of the comments, running <code>brew cleanup</code> could possibly fix this error; if that alone doesn't fix it, you might try upgrading individual packages or all of your brew packages.</p>
</blockquote>
<p>I just had this same problem. Upgrading Homebrew and then cleaning up worked for me. This error likely showed up for me because of a mismatch in package versions. None of the above solutions resolved my error, but running the following homebrew commands did.</p>
<blockquote>
<p><strong>Caution</strong> - This will upgrade all your brew packages, including, but not limited to PHP. If you only want to upgrade specific packages make sure to be specific.</p>
</blockquote>
<pre><code>brew upgrade icu4c
brew upgrade // or upgrade all packages
</code></pre>
<p>and finally</p>
<pre><code>brew cleanup
</code></pre> |
26,742,054 | The client and server cannot communicate, because they do not possess a common algorithm - ASP.NET C# IIS TLS 1.0 / 1.1 / 1.2 - Win32Exception | <p>I have an issue with a C# PayTrace Gateway. The below code was working fine until yesterday when I believe they turned off SSL3 due to the Poodle Exploit. When running the code below we got the following message. The remote server has forcefully closed the connection. After doing some research on the problem we determined that because our IIS Server 7.5 was configured to still use SSL3, C# defaulted to SSL3, which PayTrace would forcibly close the connection. We then removed SSL3 from the server. Which then lead to the following error: </p>
<p><strong>The client and server cannot communicate, because they do not possess a common algorithm.</strong> </p>
<p>My guess is that there are additional SSL algorithms we need to install on the server now that SSL 3 is removed. Our IT staff claims that TLS 1.1 and TLS 1.2 are working and that ASP.NET should now be defaulting to those. But I feel like there still must be something else we need to install on the server; I have no knowledge of SSL algorithms, so I have no idea where to begin.</p>
<pre><code>var postUrl = new StringBuilder();
//Initialize url with configuration and parameter values...
postUrl.AppendFormat("UN~{0}|", this.MerchantLoginID);
postUrl.AppendFormat("PSWD~{0}|", this.MerchantTransactionKey);
postUrl.Append("TERMS~Y|METHOD~ProcessTranx|TRANXTYPE~Sale|");
postUrl.AppendFormat("CC~{0}|", cardNumber);
postUrl.AppendFormat("EXPMNTH~{0}|", expirationMonth.PadLeft(2, '0'));
postUrl.AppendFormat("EXPYR~{0}|", expirationYear);
postUrl.AppendFormat("AMOUNT~{0}|", transactionAmount);
postUrl.AppendFormat("BADDRESS~{0}|", this.AddressLine1);
postUrl.AppendFormat("BADDRESS2~{0}|", this.AddressLine2);
postUrl.AppendFormat("BCITY~{0}|", this.City);
postUrl.AppendFormat("BSTATE~{0}|", this.State);
postUrl.AppendFormat("BZIP~{0}|", this.Zip);
postUrl.AppendFormat("SADDRESS~{0}|", this.AddressLine1);
postUrl.AppendFormat("SADDRESS2~{0}|", this.AddressLine2);
postUrl.AppendFormat("SCITY~{0}|", this.City);
postUrl.AppendFormat("SSTATE~{0}|", this.State);
postUrl.AppendFormat("SZIP~{0}|", this.Zip);
if (!String.IsNullOrEmpty(this.Country))
{
postUrl.AppendFormat("BCOUNTRY~{0}|", this.Country);
}
if (!String.IsNullOrEmpty(this.Description))
{
postUrl.AppendFormat("DESCRIPTION~{0}|", this.Description);
}
if (!String.IsNullOrEmpty(this.InvoiceNumber))
{
postUrl.AppendFormat("INVOICE~{0}|", this.InvoiceNumber);
}
if (this.IsTestMode)
{
postUrl.AppendFormat("TEST~Y|");
}
//postUrl.Append();
WebClient wClient = new WebClient();
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
String sRequest = "PARMLIST=" + Url.Encode(postUrl.ToString());
wClient.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
string sResponse = "";
sResponse = wClient.UploadString(PayTraceUrl, sRequest);
</code></pre>
<p>Also, just an FYI, this issue is also happening when we connect to First Data E4 gateway so it's not just a PayTrace thing. My guess is that as more gateways turn off access to SSL3 we'll continue to run into issues with other gateways until this can be resolved on the server. Also, I did find a few suggestions online, some suggested placing the following code right before making the outbound request: </p>
<pre><code>ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
</code></pre>
<p>Unfortunately that did not work either, same error. Which is why I'm thinking something additional needs to be installed on the IIS7.5 server. I'm just not sure what. </p> | 42,124,951 | 13 | 0 | null | 2014-11-04 18:10:38.24 UTC | 9 | 2022-09-14 17:43:07.907 UTC | 2020-09-23 20:27:06.46 UTC | null | 584,756 | null | 584,756 | null | 1 | 82 | c#|asp.net|ssl|iis-7.5|poodle-attack | 274,126 | <p>There are several other posts about this now and they all point to enabling TLS 1.2. Anything less is unsafe.</p>
<p>You can do this in .NET 3.5 with a patch.<br />
You can do this in .NET 4.0 and 4.5 with a single line of code</p>
<pre><code>ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12; // .NET 4.5
ServicePointManager.SecurityProtocol = (SecurityProtocolType)3072; // .NET 4.0
</code></pre>
<p>In .NET 4.6, it automatically uses TLS 1.2.</p>
<p>See here for more details:
<a href="https://web.archive.org/web/20180309054003/https://blogs.perficient.com/microsoft/2016/04/tsl-1-2-and-net-support/" rel="noreferrer">.NET support for TLS</a>.</p> |
379,015 | UDP broadcast packets across subnets | <p>Is it possible to send a UDP broadcast packet to a different subnet through a router? I'm writing an app to discover certain devices on the network, and the PC might be on a different subnet than the devices it's looking for.</p> | 379,043 | 3 | 0 | null | 2008-12-18 19:53:36.843 UTC | 2 | 2008-12-18 20:16:21.917 UTC | null | null | null | Jon B | 27,414 | null | 1 | 22 | networking|udp | 43,972 | <p>Yes, and no.</p>
<p>It's actually do-able, so long as the intervening routers don't have <code>no ip directed-broadcasts</code> or similar configured. However these days that's the default because allowing normal broadcasts to traverse routers is a DoS problem.</p>
<p>If you really want to broadcast across subnets then you should be using <a href="http://en.wikipedia.org/wiki/IP_Multicast" rel="noreferrer">IP Multicast</a> instead. That still requires that the intervening routers are configured appropriately, but it is the "right" way to do it.</p> |
39,640,160 | What is "track by" in AngularJS and how does it work? | <p>I don't really understand how <code>track by</code> works and what it does.<br>
My main goal is to use it with <code>ng-repeat</code> to add some precision.</p> | 39,640,555 | 4 | 0 | null | 2016-09-22 13:16:06.983 UTC | 2 | 2020-07-16 08:35:48.367 UTC | 2016-09-22 13:50:26.753 UTC | null | 4,174,897 | null | 6,804,762 | null | 1 | 24 | javascript|angularjs|ng-repeat|angularjs-track-by | 42,161 | <h3>Using <code>track by</code> to track strings & duplicate values</h3>
<p>Normally <code>ng-repeat</code> tracks each item by the item itself. For the given array <code>objs = [ 'one', 'one', 2, 'five', 'string', 'foo']</code>, <code>ng-repeat</code> attempts to track changes by each <code>obj</code> in the <code>ng-repeat="obj in objs"</code>. The problem is that we have duplicate values and angular will throw an error. One way to solve that is to have angular track the objects by other means. For strings, <code>track by $index</code> is a good solution as you really have no other means to track a string.</p>
<h3><code>track by</code> & triggering a digest & input focuses</h3>
<p>You allude to the fact that you're somewhat new to angular. A digest cycle occurs when angular performs an exhaustive check of each watched property in order to reflect any change in the corresponding view; often during a digest cycle your code modifies other watched properties, so the procedure needs to be performed again until angular detects no more changes.</p>
<p>For example: you click a button to update a model via <code>ng-click</code>, then you do some things (I mean, the things you wrote in the callback to perform when a user clicks), then angular triggers a digest cycle in order to refresh the view. I'm not too articulate in explaining that, so you should investigate further if that didn't clarify things.</p>
<p>So back to <code>track by</code>. Let's use an example:</p>
<ol>
<li>call a service to return an array of objects </li>
<li>update an object within the array and save object</li>
<li>after save service, depending on what the API returns, you may:
<ol>
<li>replace the whole object OR</li>
<li>update a value on the existing object</li>
</ol></li>
<li>reflect change in <code>ng-repeat</code> UI</li>
</ol>
<p>How you track this object will determine how the UI reflects the change. </p>
<p>One of the most annoying UXs I've experienced is this. Say you have a table of objects, and each cell has an input where you want to in-line edit those objects' properties. You change a value, then <code>on-blur</code> save that object and move on to the next cell to edit while you might still be waiting on the response. So this is an autosave type thing. Depending on how you set up your <code>track by</code> statement, you may lose current focus (e.g. the field you're currently editing) when the response gets written back into your array of objects.</p>
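<p>For reference, a minimal sketch of the tracking variants discussed above (the <code>item.id</code> key is illustrative; use whatever stable identifier your objects actually expose):</p>
<pre><code><!-- default: tracked by object identity (throws on duplicate primitives) -->
<li ng-repeat="obj in objs">{{obj}}</li>

<!-- strings / duplicates: track by position in the array -->
<li ng-repeat="obj in objs track by $index">{{obj}}</li>

<!-- objects with a stable key: track by that key so the DOM (and focus) survives a data refresh -->
<li ng-repeat="item in items track by item.id">{{item.name}}</li>
</code></pre>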
22,315,672 | How to configure Spring MVC with pure Java-based configuration? | <p>I have, what I would consider a pretty simple Spring MVC setup. My applicationContext.xml is this:</p>
<pre><code><mvc:annotation-driven />
<mvc:resources mapping="/css/**" location="/css/" />
<context:property-placeholder location="classpath:controller-test.properties" />
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver"
p:prefix="/WEB-INF/views/" p:suffix=".jsp" />
</code></pre>
<p>My web.xml is currently this:</p>
<pre><code> <servlet>
<servlet-name>springDispatcherServlet</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>classpath:applicationContext.xml</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
<!-- Map all requests to the DispatcherServlet for handling -->
<servlet-mapping>
<servlet-name>springDispatcherServlet</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
</code></pre>
<p>I am trying to convert this setup to pure Java-based config. I've searched the web and so far I've come up with material that explains (somewhat) how to do the Java config but doesn't explain how to register that Java config with the environment, i.e., the web context.</p>
<p>What I have so far in terms of @Configuration is this:</p>
<pre><code> @Configuration
@EnableWebMvc
@PropertySource("classpath:controller.properties")
@ComponentScan("com.project.web")
public class WebSpringConfig extends WebMvcConfigurerAdapter {
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
registry.addResourceHandler("/css/**").addResourceLocations("/css/");
}
@Bean
public ViewResolver configureViewResolver() {
InternalResourceViewResolver viewResolve = new InternalResourceViewResolver();
viewResolve.setPrefix("/WEB-INF/views/");
viewResolve.setSuffix(".jsp");
return viewResolve;
}
@Override
public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer){
configurer.enable();
}
}
</code></pre>
<p>How do I register this with the web container? I am using the latest spring (4.02).</p>
<p>Thanks!</p> | 22,316,150 | 2 | 0 | null | 2014-03-11 03:08:00.493 UTC | 21 | 2019-10-04 19:05:43.827 UTC | null | null | null | null | 1,902,183 | null | 1 | 36 | java|spring|spring-mvc | 59,874 | <p>You need to make following changes to <code>web.xml</code> in order to support java based configuration. This will tell the the <code>DispatcherServlet</code> to load configuration using the annotation based java configuration <code>AnnotationConfigWebApplicationContext</code>. You only need to pass the location of your java config file to the <code>contextConfigLocation</code> param, as below</p>
<pre><code><servlet>
<servlet-name>springDispatcherServlet</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<init-param>
<param-name>contextClass</param-name>
<param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext</param-value>
</init-param>
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>/*path to your WebSpringConfig*/ </param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
</code></pre>
<p><strong>Update: Doing the same without making changes to web.xml</strong></p>
<p>You can even do this without <code>web.xml</code>, as Servlet Specification 3.0 makes the <code>web.xml</code> optional. You only need to implement/configure the <code>WebApplicationInitializer</code> interface to configure the <code>ServletContext</code>, which will allow you to create, configure, and register the <code>DispatcherServlet</code> programmatically. The good thing is that <code>WebApplicationInitializer</code> is detected automatically.</p>
<p>In summary, one needs to implement <code>WebApplicationInitializer</code> to get rid of <code>web.xml</code>.</p>
<pre><code> public class MyWebAppInitializer implements WebApplicationInitializer {
@Override
public void onStartup(ServletContext container) {
// Create the 'root' Spring application context
AnnotationConfigWebApplicationContext rootContext =
new AnnotationConfigWebApplicationContext();
rootContext.register(WebSpringConfig.class);
// Manage the lifecycle of the root application context
container.addListener(new ContextLoaderListener(rootContext));
// Create the dispatcher servlet's Spring application context
AnnotationConfigWebApplicationContext dispatcherContext =
new AnnotationConfigWebApplicationContext();
dispatcherContext.register(DispatcherConfig.class);
// Register and map the dispatcher servlet
ServletRegistration.Dynamic dispatcher =
container.addServlet("dispatcher", new DispatcherServlet(dispatcherContext));
dispatcher.setLoadOnStartup(1);
dispatcher.addMapping("/");
}
}
</code></pre>
<p><strong>Update</strong>: from comments<br>
A slightly more convoluted explanation is also included in the official Spring reference <a href="http://docs.spring.io/spring/docs/4.0.5.RELEASE/spring-framework-reference/htmlsingle/#beans-java-instantiating-container-web" rel="nofollow noreferrer">Spring 4 Release</a></p>
<p>Reference:</p>
<p><a href="http://docs.spring.io/spring/docs/3.1.x/javadoc-api/org/springframework/web/WebApplicationInitializer.html" rel="nofollow noreferrer">http://docs.spring.io/spring/docs/3.1.x/javadoc-api/org/springframework/web/WebApplicationInitializer.html</a></p> |
41,862,359 | Count number of values in R | <p>I have below dataset: </p>
<pre><code> ClaimType ClaimDay ClaimCost dates month day
1 1 1 10811 1970-01-01 1 1970-01-01
2 1 1 18078 1970-01-01 1 1970-01-01
3 1 2 44579 1970-01-01 1 1970-01-02
4 1 3 23710 1970-01-01 1 1970-01-03
5 1 4 29580 1970-01-01 1 1970-01-04
6 1 4 36208 1970-01-01 1 1970-01-04
</code></pre>
<p>I would like to create a new dataset with the columns "claim day" and "day". Claim day should be counted per value. So for example since we have two ones, a two, a three and then two fours I would like the new dataset to be like: </p>
<pre><code>ClaimDay day
2 1970-01-01
1 1970-01-02
1 1970-01-03
2 1970-01-04
</code></pre>
<p>As you can see the Claimday and day are related. </p>
<p>I have tried </p>
<pre><code>mydata <- aggregate(ClaimDay~Day,FUN=sum,data=mydata)$ClaimDay
</code></pre>
<p>But the problem is that this sums the <code>ClaimDay</code> values when aggregating instead of counting them.</p>
<p><strong>Can anyone help me with my issue?</strong></p> | 41,862,571 | 2 | 1 | null | 2017-01-25 21:54:55.773 UTC | 4 | 2018-08-28 20:28:40.853 UTC | 2018-08-28 20:28:40.853 UTC | null | 4,706,171 | null | 7,247,472 | null | 1 | 5 | r|dataframe|aggregate | 44,981 | <p>If you don't mind a <code>dplyr</code> solution, this works on your example data</p>
<pre><code>library(dplyr)
df %>% select(ClaimDay, day) %>%
group_by(day) %>%
mutate(ClaimDay.count = n()) %>%
slice(1)
</code></pre> |
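<p>For completeness, the same count can be done in base R as well (assuming the data frame is called <code>mydata</code>, as in the question):</p>
<pre><code># length() counts the rows per day instead of summing ClaimDay
aggregate(ClaimDay ~ day, data = mydata, FUN = length)

# or simply tabulate the day column
as.data.frame(table(mydata$day))
</code></pre>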
24,130,745 | Convert generator object to list for debugging | <p>When I'm debugging in Python using IPython, I sometimes hit a break-point and I want to examine a variable that is currently a generator. The simplest way I can think of doing this is converting it to a list, but I'm not clear on what's an easy way of doing this in one line in <code>ipdb</code>, since I'm so new to Python.</p> | 24,130,771 | 1 | 0 | null | 2014-06-09 23:41:38.693 UTC | 25 | 2017-02-24 15:24:47.947 UTC | 2017-02-24 15:24:47.947 UTC | null | 1,079,075 | null | 1,079,075 | null | 1 | 151 | python|generator|ipdb | 214,008 | <p>Simply call <code>list</code> on the generator.</p>
<pre><code>lst = list(gen)
lst
</code></pre>
<p>Be aware that this consumes the generator, which will not return any further items.</p>
<p>You also cannot directly call <code>list</code> at the ipdb prompt, as it conflicts with the debugger command for listing lines of code.</p>
<p>Tested on this file:</p>
<pre><code>def gen():
yield 1
yield 2
yield 3
yield 4
yield 5
import ipdb
ipdb.set_trace()
g1 = gen()
text = "aha" + "bebe"
mylst = range(10, 20)
</code></pre>
<p>which when run:</p>
<pre><code>$ python code.py
> /home/javl/sandbox/so/debug/code.py(10)<module>()
9
---> 10 g1 = gen()
11
ipdb> n
> /home/javl/sandbox/so/debug/code.py(12)<module>()
11
---> 12 text = "aha" + "bebe"
13
ipdb> lst = list(g1)
ipdb> lst
[1, 2, 3, 4, 5]
ipdb> q
Exiting Debugger.
</code></pre>
<h1>General method for escaping function/variable/debugger name conflicts</h1>
<p>There are debugger commands <code>p</code> and <code>pp</code> that will <code>print</code> and <code>prettyprint</code> any expression following them.</p>
<p>So you could use it as follows:</p>
<pre><code>$ python code.py
> /home/javl/sandbox/so/debug/code.py(10)<module>()
9
---> 10 g1 = gen()
11
ipdb> n
> /home/javl/sandbox/so/debug/code.py(12)<module>()
11
---> 12 text = "aha" + "bebe"
13
ipdb> p list(g1)
[1, 2, 3, 4, 5]
ipdb> c
</code></pre>
<p>There is also an <code>exec</code> command, called by prefixing your expression with <code>!</code>, which forces debugger to take your expression as Python one.</p>
<pre><code>ipdb> !list(g1)
[]
</code></pre>
<p>For more details see <code>help p</code>, <code>help pp</code> and <code>help exec</code> when in debugger.</p>
<pre><code>ipdb> help exec
(!) statement
Execute the (one-line) statement in the context of
the current stack frame.
The exclamation point can be omitted unless the first word
of the statement resembles a debugger command.
To assign to a global variable you must always prefix the
command with a 'global' command, e.g.:
(Pdb) global list_options; list_options = ['-l']
</code></pre> |
5,626,478 | Using Tasks with conditional continuations | <p>I'm a little confused about how to use Tasks with conditional Continuations.</p>
<p>I have a task, and I want to continue with tasks that handle success and error, and then wait on those to complete.</p>
<pre><code>void FunctionThrows() {throw new Exception("faulted");}
static void MyTest()
{
var taskThrows = Task.Factory.StartNew(() => FunctionThrows());
var onSuccess = taskThrows.ContinueWith(
prev => Console.WriteLine("success"),
TaskContinuationOptions.OnlyOnRanToCompleted);
var onError = taskThrows.ContinueWith(
prev => Console.WriteLine(prev.Exception),
TaskContinuationOptions.OnlyOnFaulted);
//so far, so good
//this throws because onSuccess was cancelled before it was started
Task.WaitAll(onSuccess, onError);
}
</code></pre>
<p>Is this the preferred way of doing task success/failure branching? Also, how am I supposed to join all these tasks, suppose I've created a long line of continuations, each having their own error handling.</p>
<pre><code> //for example
var task1 = Task.Factory.StartNew(() => ...)
var task1Error = task1.ContinueWith( //on faulted
var task2 = task1.ContinueWith( //on success
var task2Error = task2.ContinueWith( //on faulted
var task3 = task2.ContinueWith( //on success
//etc
</code></pre>
<p>Calling <code>WaitAll</code> on these invariably throws, because some of the continuations will be cancelled due to the <code>TaskContinuationOptions</code>, and calling <code>Wait</code> on a cancelled task throws.
How do I join these without getting the "A task was cancelled" exception"?</p> | 5,885,676 | 3 | 0 | null | 2011-04-11 19:30:30.467 UTC | 5 | 2012-11-29 13:33:04.297 UTC | 2012-03-13 22:01:22.853 UTC | null | 211,563 | null | 197,605 | null | 1 | 31 | c#|.net-4.0|task-parallel-library | 17,558 | <p>I think your main problem is that you're telling those two tasks to "Wait" with your call to </p>
<pre><code>Task.WaitAll(onSuccess, onError);
</code></pre>
<p>The <strong>onSuccess</strong> and <strong>onError</strong> continuations are automatically set up for you and will be executed <em>after</em> their antecedent task completes.</p>
<p>If you simply replace your <code>Task.WaitAll(...)</code> with <code>taskThrows.Start();</code> I believe you will get the desired output.</p>
<p>Here is a bit of an example I put together:</p>
<pre><code>class Program
{
static int DivideBy(int divisor)
{
Thread.Sleep(2000);
return 10 / divisor;
}
static void Main(string[] args)
{
const int value = 0;
var exceptionTask = new Task<int>(() => DivideBy(value));
exceptionTask.ContinueWith(result => Console.WriteLine("Faulted ..."), TaskContinuationOptions.OnlyOnFaulted | TaskContinuationOptions.AttachedToParent);
exceptionTask.ContinueWith(result => Console.WriteLine("Success ..."), TaskContinuationOptions.OnlyOnRanToCompletion | TaskContinuationOptions.AttachedToParent);
exceptionTask.Start();
try
{
exceptionTask.Wait();
}
catch (AggregateException ex)
{
Console.WriteLine("Exception: {0}", ex.InnerException.Message);
}
Console.WriteLine("Press <Enter> to continue ...");
Console.ReadLine();
}
}
</code></pre> |
6,272,945 | How can I get a message bundle string from inside a managed bean? | <p>I would like to be able to retrieve a string from a message bundle from inside a JSF 2 managed bean. This would be done in situations where the string is used as the summary or details parameter in a <code>FacesMessage</code> or as the message in a thrown exception.</p>
<p>I want to make sure that the managed bean loads the correct message bundle for the user's locale. It is not clear to me how to do this from a managed bean using JSF API calls.</p>
<p>My configuration is:</p>
<ul>
<li>Using Tomcat 7 as the container so the solution cannot depend on API calls that only work in a full application server container</li>
<li>Using the JSF 2 reference implementation (Mojarra)</li>
<li>NOT using any libraries that allow CDI</li>
</ul>
<p><strong>NOTE:</strong> I did see <a href="https://stackoverflow.com/questions/3478073/jsf-2-localization-managed-bean">this similar question</a>, but it depends on features that are unavailable in my configuration</p>
<p><strong>EDIT:</strong> I made a mistake in my original question. What I meant to ask was "How can I get a <strong>resource</strong> bundle string from inside a managed bean"? BalusC gave me the correct answer for what I asked. The solution for what I actually meant to ask is very similar:</p>
<pre><code>public static String getResourceBundleString(
String resourceBundleName,
String resourceBundleKey)
throws MissingResourceException {
FacesContext facesContext = FacesContext.getCurrentInstance();
ResourceBundle bundle =
facesContext.getApplication().getResourceBundle(
facesContext, resourceBundleName);
return bundle.getString(resourceBundleKey);
}
</code></pre>
<p>Also, here is a link to <a href="https://stackoverflow.com/questions/2668161/jsf-when-to-use-message-bundle-and-resource-bundle">another question</a> that explains the difference between "message" bundles and "resource" bundles.</p> | 6,272,972 | 3 | 0 | null | 2011-06-08 00:08:22.603 UTC | 10 | 2020-01-20 15:02:05.36 UTC | 2017-05-23 10:31:07.723 UTC | null | -1 | null | 346,112 | null | 1 | 32 | jsf|jsf-2|managed-bean|message-bundle | 66,371 | <p>You can get the full qualified bundle name of <code><message-bundle></code> by <a href="http://download.oracle.com/javaee/6/api/javax/faces/application/Application.html#getMessageBundle%28%29" rel="noreferrer"><code>Application#getMessageBundle()</code></a>. You can get the current locale by <a href="http://download.oracle.com/javaee/6/api/javax/faces/component/UIViewRoot.html#getLocale%28%29" rel="noreferrer"><code>UIViewRoot#getLocale()</code></a>. You can get a <code>ResourceBundle</code> out of a full qualified bundle
name and the locale by <a href="http://download.oracle.com/javase/6/docs/api/java/util/ResourceBundle.html#getBundle%28java.lang.String,%20java.util.Locale%29" rel="noreferrer"><code>ResourceBundle#getBundle()</code></a>.</p>
<p>So, summarized:</p>
<pre><code>FacesContext facesContext = FacesContext.getCurrentInstance();
String messageBundleName = facesContext.getApplication().getMessageBundle();
Locale locale = facesContext.getViewRoot().getLocale();
ResourceBundle bundle = ResourceBundle.getBundle(messageBundleName, locale);
// ...
</code></pre>
<hr>
<p><strong>Update</strong>: as per the mistake in the question, you actually want to get the bundle which is identified by the <code><base-name></code> of <code><resource-bundle></code>. This is unfortunately not directly available by a standard JSF API. You've either to hardcode the same base name in the code and substitute the <code>messageBundleName</code> in the above example with it, or to inject it as a managed property on <code><var></code> in a request scoped bean:</p>
<pre><code>@ManagedProperty("#{msg}")
private ResourceBundle bundle; // +setter
</code></pre> |
34,560,017 | httpclient call to webapi to post data not working | <p>I need to make a simple webapi call to post method with string argument.</p>
<p>Below is the code I'm trying, but when the breakpoint is hit on the webapi method, the received value is <code>null</code>.</p>
<pre><code>StringContent stringContent = new System.Net.Http.StringContent("{ \"firstName\": \"John\" }", System.Text.Encoding.UTF8, "application/json");
HttpClient client = new HttpClient();
HttpResponseMessage response = await client.PostAsync(url.ToString(), stringContent);
</code></pre>
<p>and server side code:</p>
<pre><code> // POST api/values
[HttpPost]
public void Post([FromBody]string value)
{
}
</code></pre>
<p>please help...</p> | 34,560,372 | 3 | 1 | null | 2016-01-01 21:01:38.18 UTC | 1 | 2022-03-03 10:36:38.303 UTC | 2016-01-01 22:10:12.467 UTC | null | 554,284 | null | 4,613,758 | null | 1 | 6 | c#|asp.net-web-api | 52,735 | <p>If you want to send a json to your Web API, the best option is to use a model binding feature, and use a Class, instead a string.</p>
<h3>Create a model</h3>
<pre><code>public class MyModel
{
[JsonProperty("firstName")]
public string FirstName { get; set; }
}
</code></pre>
<p>If you don't want to use the JsonProperty attribute, you can write the property in lower camel case, like this:</p>
<pre><code>public class MyModel
{
public string firstName { get; set; }
}
</code></pre>
<h3>Then change your action: change the parameter type to MyModel</h3>
<pre><code>[HttpPost]
public void Post([FromBody]MyModel value)
{
//value.FirstName
}
</code></pre>
<p>You can create C# classes automatically using Visual Studio, look this answer here <a href="https://stackoverflow.com/questions/34302845/deserialize-json-into-object-c-sharp/34303130#34303130">Deserialize JSON into Object C#</a></p>
<p>I made this following test code</p>
<h3>Web API Controller and View Model</h3>
<pre><code>using System.Web.Http;
using Newtonsoft.Json;
namespace WebApplication3.Controllers
{
public class ValuesController : ApiController
{
[HttpPost]
public string Post([FromBody]MyModel value)
{
return value.FirstName.ToUpper();
}
}
public class MyModel
{
[JsonProperty("firstName")]
public string FirstName { get; set; }
}
}
</code></pre>
<h3>Console client application</h3>
<pre><code>using System;
using System.Net.Http;
namespace Temp
{
public class Program
{
public static void Main(string[] args)
{
Console.WriteLine("Enter to continue");
Console.ReadLine();
DoIt();
Console.ReadLine();
}
private static async void DoIt()
{
using (var stringContent = new StringContent("{ \"firstName\": \"John\" }", System.Text.Encoding.UTF8, "application/json"))
using (var client = new HttpClient())
{
try
{
var response = await client.PostAsync("http://localhost:52042/api/values", stringContent);
var result = await response.Content.ReadAsStringAsync();
Console.WriteLine(result);
}
catch (Exception ex)
{
Console.ForegroundColor = ConsoleColor.Red;
Console.WriteLine(ex.Message);
Console.ResetColor();
}
}
}
}
}
</code></pre>
<h3>Output</h3>
<pre><code>Enter to continue
"JOHN"
</code></pre>
<p><a href="https://i.stack.imgur.com/iM5eZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iM5eZ.png" alt="Code output"></a></p> |
34,608,361 | How to reset form validation on submission of the form in ANGULAR 2 | <p>I have to reset my form along with validation. is there any method to reset the state of form from ng-dirty to ng-pristine.</p> | 34,608,522 | 20 | 2 | null | 2016-01-05 09:38:03.553 UTC | 7 | 2022-06-01 08:05:19.033 UTC | null | null | null | null | 5,350,006 | null | 1 | 56 | angular|angular2-forms | 108,072 | <p>There doesn't seem to be support for that yet.
A workaround I have seen is to recreate the form after submit, which is obviously cumbersome and ugly.</p>
<p>See also </p>
<ul>
<li><a href="https://github.com/angular/angular/issues/6196" rel="noreferrer">https://github.com/angular/angular/issues/6196</a></li>
<li><a href="https://github.com/angular/angular/issues/4933" rel="noreferrer">https://github.com/angular/angular/issues/4933</a></li>
<li><a href="https://github.com/angular/angular/issues/5568" rel="noreferrer">https://github.com/angular/angular/issues/5568</a></li>
<li><a href="https://github.com/angular/angular/issues/4914" rel="noreferrer">https://github.com/angular/angular/issues/4914</a></li>
</ul> |
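<p>For the time being, a sketch of the "recreate the form" workaround using the model-driven forms API looks roughly like this (field names are illustrative, and import paths differed in the early betas):</p>
<pre><code>import { Component } from '@angular/core';
import { FormBuilder, FormGroup, Validators } from '@angular/forms';

@Component({ selector: 'my-form', templateUrl: './my-form.component.html' })
export class MyFormComponent {
  form: FormGroup;

  constructor(private fb: FormBuilder) {
    this.buildForm();
  }

  // building a fresh group resets value, validity and the pristine/untouched flags
  buildForm() {
    this.form = this.fb.group({
      title: ['', Validators.required]
    });
  }

  onSubmit() {
    // ...persist this.form.value...
    this.buildForm(); // back to ng-pristine / ng-untouched
  }
}
</code></pre>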
50,428,046 | Vue.js open modal by click of a button | <p>How to use a button to show the modal in other components? For example, I have the following components:</p>
<p>info.vue</p>
<pre><code><template>
<div class="container">
<button class="btn btn-info" @click="showModal">show modal</button>
<example-modal></example-modal>
</div>
</template>
<script>
import exampleModal from './exampleModal.vue'
export default {
methods: {
showModal () {
// how to show the modal
}
},
components:{
"example-modal":exampleModal
}
}
</script>
</code></pre>
<p>exampleModal.vue</p>
<pre><code><template>
<!-- Modal -->
<div class="modal fade" id="exampleModal" tabindex="-1" role="dialog" aria-labelledby="exampleModalLabel" aria-hidden="true">
<div class="modal-dialog" role="document">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title" id="exampleModalLabel">Modal title</h5>
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">&times;</span>
</button>
</div>
<div class="modal-body">
hihi
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button>
<button type="button" class="btn btn-primary">Save changes</button>
</div>
</div>
</div>
</div>
</template>
</code></pre>
<p>How to show the modal from exampleModal.vue? I know that I can
use data-toggle and data-target to show the modal, like this: </p>
<p><code><button class="btn btn-info" data-toggle="modal" data-target="#exampleModal">show modal</button></code></p>
<p>But is there any way to show the modal by using the method "showModal"?</p> | 50,431,015 | 2 | 2 | null | 2018-05-19 17:54:11.757 UTC | 2 | 2019-11-15 15:49:17.083 UTC | 2018-05-20 07:17:50.997 UTC | null | 9,386,067 | null | 3,717,119 | null | 1 | 9 | javascript|vue.js|vuejs2|bootstrap-4 | 64,102 | <p><del>According to your snippet, you're using a classic Bootstrap modal. You just need to use <code>data-toggle</code> and <code>data-target</code> attributes in that case:</del></p>
<pre><code><div id="app">
<div class="container">
<button class="btn btn-info" data-toggle="modal" data-target="#exampleModal">show modal</button>
<example-modal></example-modal>
</div>
</div>
</code></pre>
<hr>
<h3>Edit</h3>
<p>I misread the question so i edit my answer.</p>
<p>It's possible to open the modal with a custom method. You just need to find the element "#exampleModal" by using <code>refs</code> and then open the modal with a bootstrap method (<a href="http://getbootstrap.com/docs/4.0/getting-started/javascript/#programmatic-api" rel="noreferrer">Bootstrap Programmatic API</a>)</p>
<pre><code><div id="app">
<div class="container">
<button class="btn btn-info" @click="showModal">show modal</button>
<example-modal ref="modal"></example-modal>
</div>
</div>
</code></pre>
<pre class="lang-js prettyprint-override"><code>methods: {
showModal() {
let element = this.$refs.modal.$el
$(element).modal('show')
}
}
</code></pre>
<p><a href="https://jsfiddle.net/StpFlp_DDK/b96u796x/" rel="noreferrer">Fiddle example</a></p> |
32,420,145 | How to refresh the current page | <p>I'm customizing a module in Drupal and in that module I'd like to refresh the current page right after an if statement, how can I refresh the current page in PHP</p> | 32,420,164 | 4 | 1 | null | 2015-09-06 04:48:21.833 UTC | 1 | 2022-05-23 05:48:05.45 UTC | null | null | null | null | 4,026,500 | null | 1 | 0 | php|drupal | 41,107 | <p>You can't in PHP if you've already outputted something (since all <code>headers</code> must be sent before output begins).</p>
<p>If you haven't outputted:</p>
<pre><code>header('Location: current_page_url');
or
header("Refresh:0");
</code></pre>
<p>A better way is to do it through JavaScript with the <code>window</code> object:</p>
<pre><code>window.location.reload();
</code></pre> |
6,181,312 | How to fix the uninitialized constant Rake::DSL problem on Heroku? | <p>I am getting errors similar to the ones <a href="https://stackoverflow.com/questions/6091964/delayed-job-queue-not-being-processed-on-heroku">in</a> <a href="https://stackoverflow.com/questions/6085610/rails-rake-problems-uninitialized-constant-rakedsl">these</a> <a href="https://stackoverflow.com/questions/5287121/undefined-method-task-using-rake-0-9-0/5290331#5290331">questions</a>, except mine are occuring on <a href="http://www.heroku.com" rel="nofollow noreferrer">Heroku</a>:</p>
<pre><code>2011-05-30T09:03:29+00:00 heroku[worker.1]: Starting process with command: `rake jobs:work`
2011-05-30T09:03:30+00:00 app[worker.1]: (in /app)
2011-05-30T09:03:30+00:00 heroku[worker.1]: State changed from starting to up
2011-05-30T09:03:33+00:00 app[worker.1]: rake aborted!
2011-05-30T09:03:33+00:00 app[worker.1]: uninitialized constant Rake::DSL
2011-05-30T09:03:33+00:00 app[worker.1]: /app/.bundle/gems/ruby/1.9.1/gems/rake-0.9.0/lib/rake/tasklib.rb:8:in `<class:TaskLib>'
</code></pre>
<p>The answer in those questions seems to be to specify <code>gem 'rake', '0.8.7'</code> because the 0.9 version causes the problem.</p>
<p>When I try to add <code>gem 'rake', '0.8.7'</code> to my gemfile and push to Heroku I get this error:</p>
<pre><code>Unresolved dependencies detected; Installing...
You have modified your Gemfile in development but did not check
the resulting snapshot (Gemfile.lock) into version control
You have added to the Gemfile:
* rake (= 0.8.7)
FAILED: http://devcenter.heroku.com/articles/bundler
! Heroku push rejected, failed to install gems via Bundler
error: hooks/pre-receive exited with error code 1
To [email protected]:my_app.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to '[email protected]:my_app.git'
</code></pre>
<p>My gemfile normally works fine on Heroku. What should I do?</p> | 6,263,256 | 5 | 1 | null | 2011-05-30 22:40:02.703 UTC | 32 | 2012-01-22 21:02:00.82 UTC | 2017-05-23 12:15:32.357 UTC | null | -1 | null | 173,634 | null | 1 | 102 | ruby-on-rails|ruby-on-rails-3|heroku|rake | 45,114 | <p>Put this in your Rakefile <strong>above</strong> require 'rake':</p>
<pre><code>require 'rake/dsl_definition'
</code></pre> |
5,683,225 | how to generate a static html file from the output of a php script, using php | <p>I would like to generate a static html page from a php file and save it from an other php script. That script runs a bunch of echo functions, which when viewed in a browser is a nice html page. But when I run file_get_contents it opens that file as a file on the filesystem, not as a file in an url. </p>
<p>Do I need to call file_get_contents in a localhost/site/categories.php way? How can I get this path? This is the wrong code:</p>
<pre><code><?php
$file = file_get_contents("categories.php");
file_put_contents("categories.html", $file);
?>
</code></pre> | 5,683,241 | 6 | 0 | null | 2011-04-15 23:14:32.097 UTC | 9 | 2020-08-10 17:10:11.563 UTC | null | null | null | null | 518,169 | null | 1 | 15 | php|file|static|get | 20,687 | <p>To get the finished output, you need to use the PHP url wrappers functionality and request it over the webserver. Then it's as easy as:</p>
<pre><code>copy("http://localhost/site/categories.php", "categories.html");
</code></pre> |
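<p>If you'd rather render the script in-process instead of requesting it over the webserver, an output-buffering variant also works (assuming <code>categories.php</code> doesn't depend on request-specific state such as query parameters):</p>
<pre><code><?php
ob_start();                 // start capturing all echoed output
include 'categories.php';   // run the script; its output goes into the buffer
file_put_contents('categories.html', ob_get_clean()); // save the buffer and discard it
?>
</code></pre>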
6,116,053 | JavaScript Library to Bridge IndexedDB and WebSQL | <p>I'm curious if there is a library or a project out there to provide a generic interface to IndexedDB or to WebSQL, depending on user's browser's support. If they're using Chrome/Safari, use WebSQL, if they're using Firefox or Internet Explorer, use IndexedDB.</p>
<p>The poster of <a href="https://stackoverflow.com/questions/3154795/indexdb-websql-in-4-months">this question</a> appears to have homegrown a solution, but did not provide any source code.</p> | 6,295,233 | 6 | 0 | null | 2011-05-24 19:53:27.35 UTC | 20 | 2017-02-17 00:33:45.493 UTC | 2017-05-23 12:18:19.683 UTC | null | -1 | null | 77,382 | null | 1 | 17 | javascript|web-sql|indexeddb | 19,532 | <p>You might want to go with <a href="http://brian.io/lawnchair/adapters/" rel="nofollow noreferrer">Lawnchair</a>, which is quite well known, as mentioned by Guido Tapia in the question that you link to. </p>
<p>Either that, or use his <a href="http://picnet.github.com/picnet_closure_repo/demos/picnet.data.DataManager.html" rel="nofollow noreferrer">picnet.data.DataManager</a> solution.</p>
<p>Also take a look at <a href="https://github.com/coresmart/persistencejs" rel="nofollow noreferrer">persistence.js</a>.</p> |
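<p>Whichever library you pick, the bridging idea underneath is plain feature detection, roughly along these lines (a sketch, not a full adapter):</p>
<pre><code>// prefer IndexedDB, fall back to WebSQL, then to localStorage
var idb = window.indexedDB || window.mozIndexedDB || window.webkitIndexedDB;

if (idb) {
  // use an IndexedDB-backed store
} else if (window.openDatabase) {
  // use a WebSQL-backed store
} else {
  // last resort: localStorage / in-memory
}
</code></pre>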
6,059,643 | Efficient Linq Enumerable's 'Count() == 1' test | <p>Similar to this <a href="https://stackoverflow.com/questions/4958492/sql-making-count-1-efficient">question</a> but rephrased for Linq:</p>
<p>You can use <code>Enumerable<T>.Any()</code> to test if the enumerable contains data. But what's the efficient way to test if the enumerable contains a single value (i.e. <code>Enumerable<T>.Count() == 1</code>) or greater than a single value (i.e. <code>Enumerable<T>.Count() > 1</code>) without using an expensive count operation?</p> | 6,059,711 | 7 | 0 | null | 2011-05-19 13:43:24.993 UTC | 6 | 2019-01-07 10:12:25.88 UTC | 2017-05-23 11:54:50.79 UTC | null | -1 | null | 152,480 | null | 1 | 31 | c#|linq | 4,949 | <pre><code>int constrainedCount = yourSequence.Take(2).Count();
// if constrainedCount == 0 then the sequence is empty
// if constrainedCount == 1 then the sequence contains a single element
// if constrainedCount == 2 then the sequence has more than one element
</code></pre> |
5,836,710 | CSS / Javascript Show / Hide DIV using a CSS class? | <p>I've googled around and found many scripts for hiding and showing DIV contents, as a toggle on button click. </p>
<p>But they are work using ID's.</p>
<p>I would like to do the same thing BUT I want to use a class instead of an id so that if I want to have 20 DIV's that toggle ... Hide / Show I don't have to add extra code.</p>
<p>Here is some code:</p>
<pre><code><script language="javascript">
function toggle() {
var ele = document.getElementById("toggleText");
var text = document.getElementById("displayText");
if(ele.style.display == "block") {
ele.style.display = "none";
text.innerHTML = "show";
}
else {
ele.style.display = "block";
text.innerHTML = "hide";
}
}
</script>
<a id="displayText" href="javascript:toggle();">show</a> <== click Here
<div id="toggleText" style="display: none"><h1>peek-a-boo</h1></div>
</code></pre>
<p>Can anyone help with this please?</p>
<p>Thanks</p> | 5,836,729 | 8 | 3 | null | 2011-04-29 19:51:04.087 UTC | 1 | 2021-06-24 17:09:24.84 UTC | null | null | null | null | 683,553 | null | 1 | 8 | javascript|jquery|html|css | 85,900 | <p>Is jquery an option? Hopefully so, because this does what you want:</p>
<p><a href="http://api.jquery.com/toggle/" rel="noreferrer">http://api.jquery.com/toggle/</a></p>
<pre><code>$(".class").toggle();
or
$(".class").show(); $(".class").hide();
</code></pre> |
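<p>To cover the "20 DIVs without extra code" requirement, you can pair each trigger with its panel purely by class and position; the sketch below assumes each link is immediately followed by its panel:</p>
<pre><code>$('.displayText').click(function () {
  var panel = $(this).next('.toggleText');
  panel.toggle();                                        // show/hide the adjacent panel
  $(this).text(panel.is(':visible') ? 'hide' : 'show');  // keep the link label in sync
});
</code></pre>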
5,992,082 | How to remove all whitespace from a string? | <p>So <code>" xx yy 11 22 33 "</code> will become <code>"xxyy112233"</code>. How can I achieve this?</p> | 5,992,152 | 9 | 0 | null | 2011-05-13 12:49:47.5 UTC | 51 | 2021-07-22 12:41:47.513 UTC | 2020-08-04 18:57:46.417 UTC | user4417050 | 10,248,678 | null | 293,843 | null | 1 | 195 | r|regex|string|r-faq | 393,514 | <p>In general, we want a solution that is vectorised, so here's a better test example:</p>
<pre><code>whitespace <- " \t\n\r\v\f" # space, tab, newline,
# carriage return, vertical tab, form feed
x <- c(
" x y ", # spaces before, after and in between
" \u2190 \u2192 ", # contains unicode chars
paste0( # varied whitespace
whitespace,
"x",
whitespace,
"y",
whitespace,
collapse = ""
),
NA # missing
)
## [1] " x y "
## [2] " β β "
## [3] " \t\n\r\v\fx \t\n\r\v\fy \t\n\r\v\f"
## [4] NA
</code></pre>
<hr>
<h3>The base R approach: <code>gsub</code></h3>
<p><a href="http://www.inside-r.org/r-doc/base/gsub" rel="noreferrer"><code>gsub</code></a> replaces all instances of a string (<code>fixed = TRUE</code>) or regular expression (<code>fixed = FALSE</code>, the default) with another string. To remove all spaces, use:</p>
<pre><code>gsub(" ", "", x, fixed = TRUE)
## [1] "xy" "ββ"
## [3] "\t\n\r\v\fx\t\n\r\v\fy\t\n\r\v\f" NA
</code></pre>
<p>As DWin noted, in this case <code>fixed = TRUE</code> isn't necessary but provides slightly better performance since matching a fixed string is faster than matching a regular expression.</p>
<p>If you want to remove all types of whitespace, use:</p>
<pre><code>gsub("[[:space:]]", "", x) # note the double square brackets
## [1] "xy" "ββ" "xy" NA
gsub("\\s", "", x) # same; note the double backslash
library(regex)
gsub(space(), "", x) # same
</code></pre>
<p><a href="http://www.inside-r.org/r-doc/base/Regex" rel="noreferrer"><code>"[:space:]"</code></a> is an R-specific regular expression group matching all space characters. <code>\s</code> is a language-independent regular-expression that does the same thing.</p>
<hr>
<h3>The <code>stringr</code> approach: <code>str_replace_all</code> and <code>str_trim</code></h3>
<p><code>stringr</code> provides more human-readable wrappers around the base R functions (though as of Dec 2014, the development version has a branch built on top of <code>stringi</code>, mentioned below). The equivalents of the above commands, using <code>str_replace_all</code>, are:</p>
<pre><code>library(stringr)
str_replace_all(x, fixed(" "), "")
str_replace_all(x, space(), "")
</code></pre>
<p><code>stringr</code> also has a <a href="http://www.inside-r.org/packages/CRAN/stringr/docs/str_trim" rel="noreferrer"><code>str_trim</code></a> function which removes only leading and trailing whitespace.</p>
<pre><code>str_trim(x)
## [1] "x y" "β β" "x \t\n\r\v\fy" NA
str_trim(x, "left")
## [1] "x y " "β β "
## [3] "x \t\n\r\v\fy \t\n\r\v\f" NA
str_trim(x, "right")
## [1] " x y" " β β"
## [3] " \t\n\r\v\fx \t\n\r\v\fy" NA
</code></pre>
<hr>
<h3>The <code>stringi</code> approach: <code>stri_replace_all_charclass</code> and <code>stri_trim</code></h3>
<p><code>stringi</code> is built upon the platform-independent <a href="http://site.icu-project.org/" rel="noreferrer">ICU library</a>, and has an extensive set of string manipulation functions. The <a href="http://docs.rexamine.com/R-man/stringi/stri_replace.html" rel="noreferrer">equivalents</a> of the above are:</p>
<pre><code>library(stringi)
stri_replace_all_fixed(x, " ", "")
stri_replace_all_charclass(x, "\\p{WHITE_SPACE}", "")
</code></pre>
<p>Here <a href="http://docs.rexamine.com/R-man/stringi/stringi-search-charclass.html" rel="noreferrer"><code>"\\p{WHITE_SPACE}"</code></a> is an alternate syntax for the set of Unicode code points considered to be whitespace, equivalent to <code>"[[:space:]]"</code>, <code>"\\s"</code> and <code>space()</code>. For more complex regular expression replacements, there is also <code>stri_replace_all_regex</code>.</p>
<p><code>stringi</code> also has <a href="http://docs.rexamine.com/R-man/stringi/stri_trim.html" rel="noreferrer">trim functions</a>.</p>
<pre><code>stri_trim(x)
stri_trim_both(x) # same
stri_trim(x, "left")
stri_trim_left(x) # same
stri_trim(x, "right")
stri_trim_right(x) # same
</code></pre> |
46,340,474 | Set a default reviewer | <p>I do pull requests in a repo where it's always the same person reviewing.
I would like to set him as the default reviewer so that I don't always have to choose him at each pull request, it'd be automatic.
How to do that ?</p> | 46,346,494 | 2 | 1 | null | 2017-09-21 09:37:40.823 UTC | 3 | 2021-12-30 22:59:56.203 UTC | null | null | null | null | 5,024,699 | null | 1 | 58 | github | 43,715 | <p>GitHub has the ability to set the default reviewer using a <a href="https://help.github.com/articles/about-codeowners/" rel="noreferrer"><code>CODEOWNERS</code></a> file.</p>
<blockquote>
<p>Code owners are automatically requested for review when someone opens a pull request that modifies code that they own. When someone with admin permissions has <a href="https://help.github.com/articles/enabling-required-reviews-for-pull-requests/" rel="noreferrer">enabled required reviews</a>, they can optionally require approval from a code owner.</p>
<p>To use a CODEOWNERS file, create a new file called CODEOWNERS in the root, <code>docs/</code>, or <code>.github/</code> directory of the repository, in the branch where you'd like to add the code owners.</p>
</blockquote> |
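<p>A minimal <code>.github/CODEOWNERS</code> file could look like this (the username is a placeholder for your usual reviewer):</p>
<pre><code># request a review from @their-username on every pull request
*        @their-username

# or scope it to part of the tree
docs/    @their-username
</code></pre>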
46,428,611 | Process finished with exit code 1 Spring Boot Intellij | <p>I've received the message "Process finished with exit code 1" when I run my project. I have tried several solutions but no topic is the same error as mine. My project doesn't execute any line of code, just abort the process.<a href="https://i.stack.imgur.com/isj5b.png" rel="noreferrer"><img src="https://i.stack.imgur.com/isj5b.png" alt="enter image description here" /></a></p> | 46,533,277 | 11 | 2 | null | 2017-09-26 14:04:16.957 UTC | 5 | 2022-06-29 05:34:27.057 UTC | 2020-12-09 20:05:20.11 UTC | null | 5,678,845 | null | 5,678,845 | null | 1 | 31 | tomcat|spring-boot|intellij-idea | 48,678 | <ol>
<li>Delete folder .idea from project folder.</li>
<li>Delete all .iml from project folder.</li>
</ol> |
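<p>From a terminal in the project root (macOS/Linux), that amounts to something like:</p>
<pre><code>rm -rf .idea
find . -name "*.iml" -type f -delete
</code></pre>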
43,974,824 | align-self: flex-end not moving item to bottom | <p>As you can see here: <a href="https://jsfiddle.net/0zq5a5xu/1/" rel="noreferrer">JSFiddle</a> </p>
<p>I want author div to be at the bottom of his parent container. I tried with <code>align-self: flex-end;</code> but it's not working. What am I doing wrong?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>.flexbox {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
.item {
display: flex;
flex-direction: column;
width: 100px;
border: 1px solid #000;
padding: 10px;
}
.item .author {
width: 100%;
align-self: flex-end;
border: 1px solid #000;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="flexbox">
<div class="item">
<div class="title">
Title
</div>
<div class="content">
Content
</div>
<div class="author">
Author
</div>
</div>
<div class="item">
<div class="title">
Title
</div>
<div class="content">
Content<br>Content
</div>
<div class="author">
Author
</div>
</div>
<div class="item">
<div class="title">
Title
</div>
<div class="content">
Content<br>Content<br>Content
</div>
<div class="author">
Author
</div>
</div>
</div></code></pre>
</div>
</div>
</p> | 43,974,846 | 2 | 1 | null | 2017-05-15 08:37:29.693 UTC | 5 | 2017-05-15 19:11:12.697 UTC | 2017-05-15 19:11:12.697 UTC | null | 3,597,276 | null | 2,490,424 | null | 1 | 35 | html|css|flexbox | 36,136 | <p>Try add to .author</p>
<pre><code>margin-top: auto;
</code></pre>
<p>You also need to remove the <code>align-self: flex-end</code> declaration.</p>
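<p>Applied to the CSS from the question, the rule becomes the following (in a column flex container, <code>align-self</code> only controls horizontal cross-axis alignment, while the auto top margin is what pushes the item down the main axis):</p>
<pre><code>.item .author {
  width: 100%;
  margin-top: auto;        /* pushes the author block to the bottom of the card */
  border: 1px solid #000;
}
</code></pre>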