_id | partition | text | language | title
---|---|---|---|---|
d6501 | train | When looping over an array with jQuery's each, you should always use the callback arguments to access the array element, and use the $.each method as opposed to $(selector).each:
$.each(keywords, function(index, item)
{
    var pattern = new RegExp("(" + item + ")", "gi");
});
In the code you are using, if you log typeof this to the console you will find it is not actually a string.
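For example (keywords being your array of strings):
$.each(keywords, function(index, item) {
    console.log(typeof this); // "object" - the value gets boxed
    console.log(typeof item); // "string"
});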
API reference: http://api.jquery.com/jQuery.each/ | unknown | |
d6502 | train | Well, sadly nobody answered, but I did it myself. I simply used a cookie to know where I am.
On laravel's www/index.php
if(!empty($_COOKIE['laravel'])||$_SERVER['REQUEST_URI']=='/login'){
require __DIR__ . '/../private/bootstrap/autoload.php';
$app = require_once __DIR__ . '/../private/bootstrap/start.php';
$app->run();
}else{
define('WP_USE_THEMES', true);
require( dirname( __FILE__ ) . '/wordpress/wp-blog-header.php' );
}
On controllers/UserController.php function login()
if(Input::has('remember'))
setcookie ('laravel','yes!');
else
setcookie ('laravel','right!',time()+60*60*24*30);
On controllers/UserController.php function logout()
setcookie('laravel', null, -1);
Hope it helps somebody :) | unknown | |
d6503 | train | You need to put a handler entry in the web.config for static files to be served up. By default a 404 is returned for any requests that are not served via a managed handler.
If your file is in the root, then in the Orchard.Web web.config replace
<handlers accessPolicy="Script">
<!-- Clear all handlers, prevents executing code file extensions or returning any file contents. -->
<clear />
<!-- Return 404 for all requests via a managed handler. The URL routing handler will substitute the MVC request handler when routes match. -->
<add name="NotFound" path="*" verb="*" type="System.Web.HttpNotFoundHandler" preCondition="integratedMode" requireAccess="Script" />
<!-- WebApi -->
<remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" />
<remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" />
<remove name="ExtensionlessUrlHandler-Integrated-4.0" />
<add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" />
<add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" />
<add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
</handlers>
with
<handlers accessPolicy="Script, Read">
<!-- Clear all handlers, prevents executing code file extensions or returning any file contents. -->
<clear />
<add name="TestFile" path="test.html" verb="*" type="System.Web.StaticFileHandler" preCondition="integratedMode" requireAccess="Read" />
<!-- Return 404 for all requests via a managed handler. The URL routing handler will substitute the MVC request handler when routes match. -->
<add name="NotFound" path="*" verb="*" type="System.Web.HttpNotFoundHandler" preCondition="integratedMode" requireAccess="Script" />
<!-- WebApi -->
<remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" />
<remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" />
<remove name="ExtensionlessUrlHandler-Integrated-4.0" />
<add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" />
<add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" />
<add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
</handlers>
The path can use wildcards if you want to allow access to multiple files. | unknown | |
d6504 | train | You could unnest() the array of strings and then compare your input string with every element like you wanted.
You would get as many rows in the output as there are elements in your array. Since you need a clear indicator of whether any of the comparisons against the array elements yields true, use the bool_or() aggregate function:
select
bool_or('string12345' ilike arr_element||'%')
from
unnest(ARRAY['string123','something']::text[]) x(arr_element);
This would give you TRUE since:
SELECT 'string12345' ilike 'string123%' -- true
Note: bool_or() returns true if at least one input value is true, otherwise false.
A: select string ilike ANY(
select s || '%'
from unnest(text_array) s(s)
)
A: You can use EXISTS with unnest as follows:
SELECT EXISTS (SELECT * FROM unnest(text_array) a WHERE 'aaa1234' ILIKE a||'%') | unknown | |
d6505 | train | The AppDomain.ProcessExit event will fire before unloading the domain. If the code to run doesn't take too long, it could be used like this:
Imports System.EnterpriseServices
<Assembly: ApplicationName("MySender")>
<Assembly: ApplicationActivation(ActivationOption.Server)>
<ClassInterface(ClassInterfaceType.None), ProgId("MySender.Sender")> _
<Transaction(EnterpriseServices.TransactionOption.NotSupported)> _
Public Class Sender
Shared Sub New
AddHandler AppDomain.CurrentDomain.ProcessExit, AddressOf MyDisposalCode
End Sub
'....
Shared Sub MyDisposalCode(sender as Object, e as EventArgs)
'My disposal code
End Sub
End Class
It's important to notice that .Net will enforce a 2 second timeout on this code. | unknown | |
d6506 | train | The rule that you use to compile the JsClient.ml file is not correct.
JsClient.byte:
ocamlbuild -use-menhir -menhir "menhir --external-tokens Lexer"
As you said, this file uses the module Js, so you need to compile it the same way as the file Formula.ml:
ocamlfind ocamlc -package js_of_ocaml -package js_of_ocaml.syntax \
-syntax camlp4o -linkpkg -o JsClient.byte JsClient.ml
js_of_ocaml JsClient.byte | unknown | |
d6507 | train | If you use flexbox on the row, you can use the order property to do this for you (you can use media queries to apply display: flex only on mobile devices; see the sketch after the snippet below).
See how the positions of fourth-row and first-row are swapped in the demo below:
div.row {
display: flex;
flex-direction: column;
}
[id$='-row'] {
order: 2;
}
#first-row {
order: 3;
}
#fourth-row {
order: 1;
}
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet">
<div class="row">
<div class="col-sm-12" id="first-row">1</div>
<div class="col-sm-12" id="second-row">2</div>
<div class="col-sm-12" id="third-row">3</div>
<div class="col-sm-12" id="fourth-row">4</div>
</div>
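For example, to apply the reordering only on small screens, the same rules can be wrapped in a media query; the 767px breakpoint is just an assumption, adjust it to your layout:
@media (max-width: 767px) {
  div.row {
    display: flex;
    flex-direction: column;
  }
  [id$='-row'] { order: 2; }
  #first-row { order: 3; }
  #fourth-row { order: 1; }
}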
A: Use Flex property to swap elements. Here is an article about it Reverse Elements Order.
Unable to demonstrate the mobile only part, so please use the below CSS.
@media only screen and (max-width : 480px) {
.row { display: flex; flex-direction: column-reverse; }
}
$('button').on('click', function(){
$('.row').toggleClass('row-reverse');
})
.row-reverse { display: flex; flex-direction: column-reverse; }
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="row">
<div class="col-sm-12" id="first-row">first</div>
<div class="col-sm-12" id="second-row">second</div>
<div class="col-sm-12" id="third-row">third</div>
<div class="col-sm-12" id="fourth-row">fourth</div>
</div>
<button>reverse/unreverse</button>
A: If you just want to change the look in different screen sizes, you can use default Bootstrap grid ordering:
https://getbootstrap.com/docs/3.3/css/#grid-column-ordering
Might be easier to see the responsive result here: https://codepen.io/anon/pen/vJQmXq
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/>
<div class="row">
<div class="col-sm-12 col-md-3 col-md-push-9" id="fourth-row">4</div>
<div class="col-sm-12 col-md-3" id="second-row">2</div>
<div class="col-sm-12 col-md-3" id="third-row">3</div>
<div class="col-sm-12 col-md-3 col-md-pull-9" id="first-row">1</div>
</div>
A: Here you go with a solution using jQuery https://jsfiddle.net/68bfxozm/1/
var updateDivOrder = function() {
if($(document).width() < 500){
var first = $('#first-row').clone();
var last = $('#fourth-row').clone();
$('.row div').first().remove();
$('.row div').last().remove();
$('.row div').first().before(last);
$('.row div').last().after(first);
} else {
var first = $('#first-row').clone();
var last = $('#fourth-row').clone();
$('.row div').first().remove();
$('.row div').last().remove();
$('.row div').first().before(first);
$('.row div').last().after(last);
}
}
updateDivOrder();
$(window).resize(updateDivOrder);
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="row">
<div class="col-sm-12" id="first-row">1</div>
<div class="col-sm-12" id="second-row">2</div>
<div class="col-sm-12" id="third-row">3</div>
<div class="col-sm-12" id="fourth-row">4</div>
</div>
My assumption is mobile width will be less than 500px.
Hope this will help you. | unknown | |
d6508 | train | According to the docs the default SizeAdjustPolicy is AdjustToContentsOnFirstShow so perhaps you are showing it and then populating it?
Either populate it first before showing it or try setting the policy to QComboBox::AdjustToContents.
Edit:
BTW I'm assuming that you have the QComboBox in a suitable layout, eg. QHBoxLayout, so that the size hint/policy is actually being used.
A: I was searching for a solution to only change the size of the dropdown menu of the combobox to fit the largest text without changing the size of the combobox itself.
Your suggestion (@Linoliumz) did help me find the solution. Here it is :
Suppose you have a combobox called cb:
C++:
int width = cb->minimumSizeHint().width();
cb->view()->setMinimumWidth(width);
PyQT :
width = cb.minimumSizeHint().width()
cb.view().setMinimumWidth(width)
A: Qt (4.6) online documentation has this to say about QComboBox:
enum SizeAdjustPolicy { AdjustToContents, AdjustToContentsOnFirstShow, AdjustToMinimumContentsLength, AdjustToMinimumContentsLengthWithIcon }
I would suggest
* ensuring the SizeAdjustPolicy is actually being used
* setting the enum to AdjustToContents. As you mention a .ui file I suggest doing that in Designer. Normally there shouldn't be anything fancy in your constructor at all concerning things you do in Designer. | unknown | |
d6509 | train | You need to call CoInitialize and CoUninitialize on the same thread, since they act on the calling thread. The OnTerminate event is always executed on the main thread.
So, remove your OnTerminate event handler, move that code into the thread, and so call CoUninitialize from the thread:
void __fastcall TThreadCamera::Execute()
{
FreeOnTerminate = true;
CoInitialize(NULL);
Camera = Variant::CreateObject("ASCOM.Simulator.Camera");
// code to operate on the camera goes here
CoUninitialize();
}
It would probably be prudent to protect the uninitialization inside a finally block.
A: In Delphi, if you need to call a thread termination code in the thread context, you should override the protected TThread.DoTerminate method instead of writing OnTerminate event handler.
A: The TThread.OnTerminate event is called in the context of the main UI thread. The virtual TThread.DoTerminate() method, which the worker thread calls after Execute() exits, uses TThread.Synchronize() to call OnTerminate. DoTerminate() is always called, even if Execute() exits due to an uncaught exception, so overriding DoTerminate() is a good way to perform thread-specific cleanup.
CoInitialize() and CoUninitialize() must be called in the same thread. So, you must call CoUninitialize() inside of Execute(), or override DoTerminate(). I prefer the latter, as it reduces the need for using try/catch or try/__finally blocks in Execute() (an RAII solution, such as TInitOle in utilcls.h, is even better).
An apartment-threaded COM object can only be accessed in the context of the thread that creates it. So you must call the camera's CameraState property and AbortExposure() procedure inside of Execute(), or override DoTerminate(), as well.
The TThread.Terminate() method simply sets the TThread.Terminated property to true, it does nothing else. It is the responsibility of Execute() to check the Terminated property periodically and exit as soon as possible. Your while that waits for the camera's ImageReady property to be true can, and should, check the thread's Terminated property so it can stop waiting when requested.
Try something more like this:
class TThreadCamera : public TThread
{
private:
bool init;
protected:
void __fastcall Execute();
void __fastcall DoTerminate();
public:
__fastcall TThreadCamera();
};
__fastcall TThreadCamera::TThreadCamera()
: TThread(false)
{
FreeOnTerminate = true;
}
void __fastcall TThreadCamera::Execute()
{
init = SUCCEEDED(CoInitialize(NULL));
if (!init) return;
Variant Camera = Variant::CreateObject("ASCOM.Simulator.Camera");
Camera.OlePropertySet("Connected", true);
Camera.OleProcedure("StartExposure", 60, true);
while (!Terminated)
{
if ((bool) Camera.OlePropertyGet("ImageReady"))
return;
Sleep(100);
}
if (Camera.OlePropertyGet("CameraState") == 2) // Exposure currently in progress
Camera.OleProcedure("AbortExposure");
}
void __fastcall TThreadCamera::DoTerminate()
{
if (init) CoUninitialize();
TThread::DoTerminate();
}
Or:
class TThreadCamera : public TThread
{
protected:
void __fastcall Execute();
public:
__fastcall TThreadCamera();
};
#include <utilcls.h>
__fastcall TThreadCamera::TThreadCamera()
: TThread(false)
{
FreeOnTerminate = true;
}
void __fastcall TThreadCamera::Execute()
{
TInitOle oleInit;
Variant Camera = Variant::CreateObject("ASCOM.Simulator.Camera");
Camera.OlePropertySet("Connected", true);
Camera.OleProcedure("StartExposure", 60, true);
while (!Terminated)
{
if ((bool) Camera.OlePropertyGet("ImageReady"))
return;
Sleep(100);
}
if (Camera.OlePropertyGet("CameraState") == 2) // Exposure currently in progress
Camera.OleProcedure("AbortExposure");
} | unknown | |
d6510 | train | If you organize your data in another format you can do the trick, as follows:
library(ggplot2)
data = data.frame(
ID = rep(c('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'), 2),
Start = c(39, 21, 28, 35, 35, 20, 21, 28, 46, 43, 49, 46, 48, 34, 37, 45),
End = c(69, 42, 52, 57, 57, 43, 42, 52, 87, 80, 92, 86, 90, 64, 69, 83),
Method = c(rep(1, 8), rep(2, 8))
)
data$Method = as.factor(data$Method)
ggplot(data) +
theme_minimal() +
coord_flip() +
geom_linerange(aes(x = ID, ymin = Start, ymax = End, colour = Method),
size = 1.5, position = position_dodge2(width = 0.5)) +
labs(x = 'ID', y = 'Age Range', title = 'Method Age Range Comparison')
Here is the output:
Note that I gather the starts together and the ends together. Then, I labeled each of them as method 1 or 2.
Also remember using position_dodge2() when you flip the coordinates. | unknown | |
d6511 | train | try
$(document).ready(function () {
$('.change').click(function () {
$('.now').hide('slow', function () {
$('.next').show('slow', function () {
$prev = $('.previous');
$now = $('.now');
$next = $('.next');
$prev.removeClass('previous').addClass('next');
$now.removeClass('now').addClass('previous');
$next.removeClass('next').addClass('now');
});
});
});
});
A: $('.change').click(function(){
var now = $('.now');
var next = $('.next');
var previous = $('.previous');
now.hide('slow', function(){
next.show('slow');
previous.removeClass('previous').addClass('next');
now.removeClass('now').addClass('previous');
next.removeClass('next').addClass('now');
});
});
DEMO
A: $('.change').click(function() {
$('.now').attr('class', 'previous');
$('.next').attr('class', 'now');
$('.previous').attr('class', 'next');
}); | unknown | |
d6512 | train | I can get the function to run:
\d my_table
Table "public.my_table"
Column | Type | Collation | Nullable | Default
--------------+-----------------------------+-----------+----------+---------
other_column | character varying(100) | | |
updated_at | timestamp without time zone | | |
new_colum | character varying(100) | | |
first_name | character varying | | |
id | integer |
DO
$$
BEGIN
-- PREPARE QUERIES
DEALLOCATE ALL;
EXECUTE FORMAT('PREPARE q_test(text) AS
SELECT
first_name
FROM my_table
WHERE %I = 0', 'id');
END
$$;
UPDATE.
A version of function that iterates over field names and executes the query for each field name. Does away with the PREPARE/EXECUTE.
DO
$$
DECLARE
fld_name text;
BEGIN
FOREACH fld_name IN ARRAY array['id', 'first_name'] LOOP
RAISE NOTICE '%', fld_name;
EXECUTE FORMAT('SELECT
first_name
FROM my_table
WHERE %I IS NOT NULL', fld_name);
END LOOP;
END
$$;
NOTICE: id
NOTICE: first_name
DO | unknown | |
d6513 | train | This article has suggestions on when to use NoSQL DB's. Also this | unknown | |
d6514 | train | You can simply use the divisibleby filter:
{% if forloop.counter|divisibleby:"4" %}
....
{% endif %}
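For example, to close a grid row after every fourth item (items, the markup and the class names are placeholders):
{% for item in items %}
    <div class="item">{{ item }}</div>
    {% if forloop.counter|divisibleby:"4" %}
        <div class="clearfix"></div>
    {% endif %}
{% endfor %}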
Update:
You have to use a counter+divisibleby filter in your template. Look at this template tag: Counter, it can help you.
Or
Filter out duplicate items (if possible) in the view before passing them to the template and use the divisibleby filter. | unknown | |
d6515 | train | Pass MEDIA_URL to render like this:
render(request,'music/song.html',{'MEDIA_URL': settings.MEDIA_URL}) and make sure to include from django.conf import settings in your views.py. | unknown | |
d6516 | train | As per this answer, only type: nfs (not type: nfs4) allows to use addr=<hostname>. | unknown | |
d6517 | train | Ok I seemed to have fixed it.
Basically, since I have 2 separate repositories on the remote server, I think the "git" user was failing because I hadn't registered an ssh keypair for the git user. That explains why one of my deploy.rb scripts was working properly, while this one wasn't.
In the link I posted in the question, one of the commenters pointed out the issue:
https://capistrano.lighthouseapp.com/projects/8716/tickets/56-query%5Frevision-unable-to-resolve-revision-for-head-on-repository
Note this error is also displayed if
you are using multiple github keys per
http://capistrano.lighthouseapp....
and you do not have these keys and a
corresponding entry in your
.ssh/config on the workstation you're
running the deploy from. so the
ls-remote is run locally. is there a
way to reference the repository at
github.com for this request while the
remote deploy uses
git@github-project1:user/project1.git
Also, see the following link for more details, since the whole ssh issue would apply even if you're not using github.
http://github.com/guides/multiple-github-accounts
A: Both your workstation and your server must be able to reach the repository at the address specified. If not, then you may have to set :local_repository to how you access it from your workstation, and :repository to be how your servers should access it.
A: For me Capistrano deployments with Git only seem to work when setting set :copy_cache, true
A: I've only used capistrano with git once, but never used or seen the use of ssh:// in the repository definition.
Try using set :repository, "[email protected]/home/git/project1.git" instead
A: Make sure the branch you are deploying from exists.
set :branch, "upgrade-to-2013.4.3"
is not equal to
set :branch, "upgrade-to-2013.3.4" | unknown | |
d6518 | train | As JB Nizet pointed out, the code seems to confuse post ids and comment ids:
Stream<E> ofAtLeastComments(Stream<E> comments, Stream<Post> posts, Integer count) {
Map<Integer, List<Post>> posts = posts.collect(Collectors.groupingBy(Post::getId));
return comments.filter(comment -> posts.get(comment.getPostId()).size() >= count);
// ^^^^
}
But the method name sounds more like you want this:
<E extends Comment> Stream<Post> ofAtLeastComments(Stream<E> comments, Stream<Post> posts, Integer count) {
// Number of comments per post ID
Map<Integer, Long> commentCounts = comments
.collect(Collectors.groupingBy(Comment::getPostId, Collectors.counting()));
return posts.filter(post -> commentCounts.getOrDefault(post.getId(), 0L) >= count);
} | unknown | |
d6519 | train | There is no way of sorting parameters automatically that I'm aware of. You can arrange them via Drag&Drop in your project's config manually, of course.
You can group them using the Parameter Separator Plugin:
Meta Data → [✓] This build is parameterized → Add Parameter → Parameter Separator | unknown | |
d6520 | train | The binaries published by Google need to find libcudart.so.7.0 in the library path; you just need to add it to LD_LIBRARY_PATH with something like
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/olivier/digits-2.0/lib/cuda"
that you put in your .bashrc
A: On an optimus laptop (running Manjaro Linux) it's possible to run TensorFlow with cuda acceleration by starting a python console with optirun:
$optirun python
I detailled the way to do it here. | unknown | |
d6521 | train | I think you'd better use the following prepared statement; the point of preparing is to avoid SQL injection:
$sql = 'SELECT * FROM employeeTable WHERE firstName = :firstName';
$sth = $conn->prepare($sql);
$sth -> bindParam(':firstName', $firstName);
$sth -> execute();
$result = $sth->fetchAll(PDO::FETCH_OBJ);
foreach ($result as $key => $value) {
echo $value->lastName, $value->email;
}
A: Just remember not to directly concatenate POST variables into your query; use prepared statements. And after executing the prepared statement, you need to fetch the results:
$select = $conn->prepare('SELECT * FROM employeeTable WHERE firstName = :firstName');
$select->execute(array(':firstName' => $_POST["firstName"]));
while($row = $select->fetch(PDO::FETCH_ASSOC)) {
echo $row['lastName'].' '.$row['email'];
}
Here is a good read:
http://wiki.hashphp.org/PDO_Tutorial_for_MySQL_Developers | unknown | |
d6522 | train | Use a regex in this case:
import re
text = open('a.txt', 'r').read()
m = re.search('(?<=hostname)(.*)', text)
print("hostname", m.groups())
If you don't get output, please share a screenshot of the text file. | unknown | |
d6523 | train | To start, the view scope is bound to a particular page/view. Multiple views won't share the same view scoped bean. The view scope starts with an initial GET request and stops when a POST action navigates with a non-null return value.
There are in general the following scenarios, depending on whether the browser is instructed to cache the page or not and the JSF state saving configuration. I'll assume that the navigation between those pages took place by a POST request (as it sounds much like the "Wizard" scenario).
When the back button is pressed:
* If browser is instructed to save the page in cache, then browser will load the page from the cache. All previously entered input values will reappear from the browser cache (thus not from the view scoped bean in the server side!). The behavior when you perform a POST request on this page depends further on the javax.faces.STATE_SAVING_METHOD configuration setting:
  * If set to server (default), then a ViewExpiredException will occur, because the view state is trashed at the server side right after POST navigation from one to other page.
  * If set to client, then it will just work, because the entire view state is contained in a hidden input field of the form.
* Or, if browser is instructed to not save the page in cache, then browser will display a browser-default "Page expired" error page. Only when the POST-redirect-GET pattern was applied for navigation, then the browser will send a brand new GET request on the same URL as the redirect URL. All previously entered input values will by default get cleared out (because the view scoped bean is recreated), but if the browser has "autocomplete" turned on (configurable at browser level), then it will possibly autofill the inputs. This is disableable by adding the autocomplete="off" attribute to the input components. When you perform a POST request on this page, it will just work regardless of the JSF state saving method.
It's easier to perform the "Wizard" scenario on a single view which contains conditionally rendered steps and offer a back button on the wizard section itself.
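A minimal sketch of that single-view wizard, assuming a view scoped wizardBean with an integer step property and next()/back() action methods (all names are placeholders):
<h:form>
    <h:panelGroup rendered="#{wizardBean.step == 1}">
        <!-- step 1 inputs -->
        <h:commandButton value="Next" action="#{wizardBean.next}" />
    </h:panelGroup>
    <h:panelGroup rendered="#{wizardBean.step == 2}">
        <!-- step 2 inputs -->
        <h:commandButton value="Back" action="#{wizardBean.back}" />
        <h:commandButton value="Finish" action="#{wizardBean.save}" />
    </h:panelGroup>
</h:form>
As long as next() and back() return void or null, every POST stays on the same view, so the view scoped bean (and thus the wizard state) survives across the steps.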
See also:
* javax.faces.application.ViewExpiredException: View could not be restored
* What scope to use in JSF 2.0 for Wizard pattern? | unknown | |
d6524 | train | You can set the .out-buffer of a handle (such as $*OUT or $*ERR) to 0:
$ ./run raku -e '$*OUT.out-buffer = 0; react whenever Supply.interval: 1 { .say }'
PID: 11340
OUT: 0
OUT: 1
OUT: 2
OUT: 3
OUT: 4
Done
A: Proc::Async itself isn't performing buffering on the received data. However, spawned processes may do their own depending on what they are outputting to, and that's what is being observed here.
Many programs make decisions about their output buffering (among other things, such as whether to emit color codes) based on whether the output handle is attached to a TTY (a terminal). The assumption is that a TTY means a human is going to be watching the output, and thus latency is preferable to throughput, so buffering is disabled (or restricted to line buffering). If, on the other hand, the output is going to a pipe or a file, then the assumption is that latency is not so important, and buffering is used to achieve a significant throughput win (a lot less system calls to write data).
When we spawn something with Proc::Async, the standard output of the spawned process is bound to a pipe - which is not a TTY. Thus the invoked program may use this to decide to apply output buffering.
If you're willing to have another dependency, then you can invoke the program via. something that fakes up a TTY, such as unbuffer (part of the expect package, it seems). Here's an example of a program that is suffering from buffering:
my $proc = Proc::Async.new: 'raku', '-e',
'react whenever Supply.interval(1) { .say }';
react whenever $proc.stdout {
.print
}
We only see a 0 and then have to wait a long time for more output. Running it via unbuffer:
my $proc = Proc::Async.new: 'unbuffer', 'raku', '-e',
'react whenever Supply.interval(1) { .say }';
react whenever $proc.stdout {
.print
}
Means that we see a number output every second.
Could Raku provide a built-in solution to this some day? Yes - by doing the "magic" that unbuffer itself does (I presume allocating a pty - kind of a fake TTY). This isn't trivial - although it is being explored by the libuv developers; at least so far as Rakudo on MoarVM goes, the moment there's a libuv release available offering such a feature, we'll work on exposing it. | unknown | |
d6525 | train | The benefit of having your own Exception class is that you, as the author of the library, can catch it and handle it.
try {
if(somethingBadHappens) {
throw new MyCustomException('msg', 0);
}
} catch (MyCustomException $e) {
if(IcanHandleIt) {
handleMyCustomException($e);
} else {
//InvalidArgumentException is used here as an example of 'common' exception
throw new InvalidArgumentException('I couldnt handle this!',1,$e);
}
}
A: Well, custom exception classes lets you route your errors properly for better handling.
if you have a class
class Known_Exception extends Exception {}
and a try catch block like this:
try {
// something known to break
} catch (Known_Exception $e) {
// handle known exception
} catch (Exception $e) {
// Handle unknown exception
}
Then you know that Exception $e is an unknown error situation and can handle that accordingly, and that is pretty useful to me. | unknown | |
d6526 | train | For most CPUs - and I believe Z80 falls in this category - the length of an instruction is implicit.
That is, you must decode the instruction in order to figure out how long it is.
A: If you're writing an emulator you don't really ever need to be able to obtain a full disassembly. You know what the program counter is now, you know whether you're expecting a fresh opcode, an address, a CB page opcode or whatever and you just deal with it. What people end up writing, in effect, is usually a per-opcode recursive descent parser.
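A minimal sketch of that per-opcode dispatch in C; the memory array, the program counter and the three opcodes shown (NOP, LD BC,nn, HALT) are illustrative, not a complete Z80 table:
#include <stdint.h>

static uint8_t memory[65536];
static uint16_t pc;

static uint8_t fetch(void) { return memory[pc++]; }

void step(void)
{
    uint8_t opcode = fetch();          /* instruction length stays implicit:  */
    switch (opcode) {                  /* each case fetches its own operands  */
    case 0x00: /* NOP  */                                      break;
    case 0x01: { uint8_t lo = fetch(); uint8_t hi = fetch();   /* LD BC,nn */
                 (void)lo; (void)hi; }                         break;
    case 0x76: /* HALT */                                      break;
    /* ...one case per opcode; CB/ED/DD/FD prefixes dispatch into another table... */
    default:   /* unknown opcode */                            break;
    }
}
Each handler consuming its own operand bytes is exactly why the length never has to be stated explicitly.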
To get to a full disassembler, most people impute some mild simulation, recursively tracking flow. Instructions are found, data is then left by deduction.
Not so much on the GB where storage was plentiful (by comparison) and piracy had a physical barrier, but on other platforms it was reasonably common to save space or to effect disassembly-proof code by writing code where a branch into the middle of an opcode would create a multiplexed second stream of operations, or where the same thing might be achieved by suddenly reusing valid data as valid code. One of Orlando's 6502 efforts even re-used some of the loader text β regular ASCII β as decrypting code. That sort of stuff is very hard to crack because there's no simple assembly for it and a disassembler therefore usually won't be able to figure out what to do heuristically. Conversely, on a suitably accurate emulator such code should just work exactly as it did originally. | unknown | |
d6527 | train | Yes, you can do this at the database level using Oracle auditing. See here for good writeup and examples of its use. | unknown | |
d6528 | train | Reading the comments, it sounds like every time you select a different value from one of the three drop downs, you want to run three macros, depending on the values selected. If that is the case, then you don't need to iterate through the Target cells (only one can be assigned, using the drop down anyway).
All you need to do is identify that one of the three cells has been changed and then based on the three specific cells, run the corresponding macros.
Private Sub Worksheet_Change(ByVal Target As Range)
' This is all that you have to check, before deciding to run the macros
If Intersect(Target, Range("C4,C23,C32")) Is Nothing Then Exit Sub
Select Case Range("C4").Value
Case "Ecommerce"
Call Ecommerce
Case "Non-Commerce"
Call NonCommerce
Case "Ecommerce & Non-Commerce", "Select Ecommerce/Non-Commerce"
Both
End Select
Select Case Range("C23").Value
Case "Select Year"
Call SelectYear
Case "2020"
Call Twentytwenty
Case "2021"
Call TwentyOne
Case "2022"
Call TwentyTwo
Case "2023"
Call TwentyThree
Case "2024"
Call TwentyFour
Case "2025"
Call TwentyFive
End Select
Select Case Range("C32").Value
Case "Select PPC"
Call SelectPPC
Case "PPC 2"
Call PPCTwo
Case "PPC 3"
Call PPCThree
Case "PPC 4"
Call PPCFour
Case "PPC 5"
Call PPCFive
Case "PPC 6"
Call PPCSix
Case "PPC 7"
Call PPCSeven
End Select
End Sub
Or better yet, listen to BruceWayne and you end up with something like this:
Private Sub Worksheet_Change(ByVal Target As Range)
' This is all that you have to check, before deciding to run the macros
If Intersect(Target, Range("C4,C23,C32")) Is Nothing Then Exit Sub
Select Case Range("C4").Value
Case "Ecommerce"
Ecommerce
Case "Non-Commerce"
NonCommerce
Case "Ecommerce & Non-Commerce", "Select Ecommerce/Non-Commerce"
Both
End Select
If IsNumeric(Range("C23").Value) Then SelectYear CInt(Range("C23").Value)
If IsNumeric(Mid(Range("C32").Value, 5)) Then SelectPPC CInt(Mid(Range("C32").Value, 5))
End Sub
Sub SelectYear(Year As Integer)
' Do stuff
End Sub
Sub SelectPPC(Value As Integer)
' Do stuff
End Sub
FYI, You don't really need to use Call. The only real difference that it makes is whether parenthesis are required or not, when a Sub-routine has parameters. | unknown | |
d6529 | train | A couple thoughts:
* The various Read* methods of StreamReader require you to ensure that your app has completed before they run, otherwise you may get no output depending on timing issues. You may want to look at the Process.WaitForExit() function if you want to use this route.
  Also, unless you have a specific reason for allocating buffers (a pain in the butt IMO) I would just use ReadLine() in a loop or, since the process has exited, ReadToEnd() to get the whole output. Neither requires you to do arrays of char, which opens you up to math errors with buffer sizes.
* If you want to go asynchronous and dump output as you run, you will want to use the BeginOutputReadLine() function (see MSDN).
* Don't forget that errors are handled differently, so if for any reason your app is writing to STDERR, you will want to use the appropriate error output functions to read that output as well. | unknown | |
d6530 | train | CancellationPending only tells the DoWork method that the starting thread wants it to abort. It does not automatically stop anything. See this example of a DoWork method:
private void DoWork(object sender, DoWorkEventArgs e){
foreach( ... )
{
//do some work
if( myBackgroundWorker.CancellationPending )
{
return;
}
}
}
The other possibility (your case is like this):
private void DoWork(object sender, DoWorkEventArgs e){
//perform a big task towards the database here
}
That last case does not give you any entry point to check for cancellation requests, so the only option is to locate the thread and kill it without giving it the chance to shut down gracefully, which is not a recommended pattern.
Your best bet is to divide the work inside DoWork into several batches, and check for cancellation requests between the sub-tasks. | unknown | |
d6531 | train | Summary
* HttpClient can only be injected inside Typed clients
* for other usages, you need IHttpClientFactory
* In both scenarios, the lifetime of the underlying HttpMessageHandler is managed by the framework, so you are not worried about (incorrectly) disposing the HttpClients.
Examples
In order to directly inject HttpClient, you need to register a specific Typed service that will receive the client:
services.AddHttpClient<GithubClient>(c => c.BaseAddress = new System.Uri("https://api.github.com"));
Now we can inject that inside the typed GithubClient
public class GithubClient
{
public GithubClient(HttpClient client)
{
// client.BaseAddress is "https://api.github.com"
}
}
You can't inject the HttpClient inside AnotherClient, because it is not typed to AnotherClient
public class AnotherClient
{
public AnotherClient(HttpClient client)
{
// InvalidOperationException, can't resolve HttpClient
}
}
You can, however:
1. Inject the IHttpClientFactory and call CreateClient(). This client will have BaseAddress set to null.
2. Or configure AnotherClient as a different typed client with, for example, a different BaseAddress (a minimal sketch follows).
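A sketch of option 2, with https://example.com standing in for whatever base address AnotherClient should use:
services.AddHttpClient<AnotherClient>(c => c.BaseAddress = new System.Uri("https://example.com"));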
Update
Based on your comment, you are registering a Named client. It is still resolved from the IHttpClientFactory.CreateClient() method, but you need to pass the 'name' of the client
Registration
services.AddHttpClient("githubClient", c => c.BaseAddress = new System.Uri("https://api.github.com"));
Usage
// note that we inject IHttpClientFactory
public HomeController(IHttpClientFactory factory)
{
this.defaultClient = factory.CreateClient(); // BaseAddress: null
this.namedClient = factory.CreateClient("githubClient"); // BaseAddress: "https://api.github.com"
}
A: Sadly I cannot comment, only post an answer, so I suggest you check out the following links:
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests
https://aspnetmonsters.com/2016/08/2016-08-27-httpclientwrong/
Regarding your questions, it more or less boils down to this:
Q1 -> IHttpClientFactory handles the connection pools of HttpClient instances, and this will help you with load and dispose problems as described in the links, if HttpClient is used incorrectly.
Q2 -> yes, you should use factory.CreateClient() according to the Microsoft docs | unknown | |
d6532 | train | $(...) is a command substitution. Command substitution executes the commands inside it. Here it tries to execute 1.0-0.1 as a command.
The $((...)) does arithmetic expansion; note the double parentheses.
While the following will trigger arithmetic expansion:
z=$(($brightness-0.1))
No, shell does not support floating point arithmetic, only whole numbers. Research other questions on this site on how to do floating point arithmetic in shell. Because arithmetic expansion also expands variables, you can remove the $ from inside. For example, pipe the string to be calculated to bc (<<< is a here string):
z=$(bc <<<"$brightness - 0.1")
Notes:
* And while we're at it, do not use backticks `...` at all. Use $(...) instead: brightness=$(xrandr --verbose | grep -m 1 -i brightness | cut -f2 -d ' ')
* UPPER CASE VARIABLES are by convention reserved for exported variables, like IFS, LINES, COLUMNS etc. Use lower case variables in your scripts. | unknown | |
d6533 | train | Just use ::toupper instead of std::toupper. That is, toupper defined in the global namespace, instead of the one defined in std namespace.
std::transform(s.begin(), s.end(), std::back_inserter(out), ::toupper);
It's working: http://ideone.com/XURh7
The reason why your code is not working: there is another overloaded function toupper in namespace std, which causes a problem when resolving the name because the compiler is unable to decide which overload you're referring to when you simply pass std::toupper. That is why the compiler says unresolved overloaded function type in the error message, which indicates the presence of overload(s).
So to help the compiler resolve the correct overload, you have to cast std::toupper as
(int (*)(int))std::toupper
That is, the following would work:
//see the last argument, how it is casted to appropriate type
std::transform(s.begin(), s.end(), std::back_inserter(out),(int (*)(int))std::toupper);
Check it out yourself: http://ideone.com/8A6iV
A: Problem
std::transform(
s.begin(),
s.end(),
std::back_inserter(out),
std::toupper
);
no matching function for call to 'transform(__gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, __gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::back_insert_iterator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, <unresolved overloaded function type>)'
This is a misleading error; the interesting part is not that there's "no matching function" for the call, but why there's no matching function.
The why is that you're passing a function reference of an "<unresolved overloaded function type>" as an argument, and GCC prefers to error on the call rather than on this overload resolution failure.
Explanation
First, you should consider how the C library is inherited in C++. <ctype.h> has a function int toupper(int).
C++ inherits this:
[n3290: 21.7/1]: Tables 74, 75, 76, 77, 78, and 79 describe headers
<cctype>, <cwctype>, <cstring>, <cwchar>, <cstdlib>
(character conversions), and <cuchar>, respectively.
[n3290: 21.7/2]: The contents of these headers shall be the same as
the Standard C Library headers <ctype.h>, <wctype.h>,
<string.h>, <wchar.h>, and <stdlib.h> and the C Unicode TR
header <uchar.h>, respectively [..]
[n3290: 17.6.1.2/6]:Names that are defined as functions in C shall
be defined as functions in the C++ standard library.
But using <ctype.h> is deprecated:
[n3290: C.3.1/1]: For compatibility with the Standard C library, the
C++ standard library provides the 18 C headers (D.5), but their use is
deprecated in C++.
And the way to access the C toupper is through the C++ backwards-compatibility header <cctype>. For such headers, the contents are either moved or copied (depending on your implementation) into the std namespace:
[n3290: 17.6.1.2/4]: [..] In the C++ standard library, however, the declarations
(except for names which are defined as macros in C) are within
namespace scope (3.3.6) of the namespace std. It is unspecified
whether these names are first declared within the global namespace
scope and are then injected into namespace std by explicit
using-declarations (7.3.3).
But the C++ library also introduces a new, locale-specific function template in header <locale>, that's also called toupper (of course, in namespace std):
[n3290: 22.2]: [..] template <class charT> charT toupper(charT c,
const locale& loc); [..]
So, when you use std::toupper, there are two overloads to choose from. Since you didn't tell GCC which function you wish to use, the overload cannot be resolved and your call to std::transform cannot be completed.
Disparity
Now, the OP of that original question did not run into this problem. He likely did not have the locale version of std::toupper in scope, but then again you didn't #include <locale> either!
However:
[n3290: 17.6.5.2]: A C++ header may include other C++ headers.
So it just so happens that either your <iostream> or your <algorithm>, or headers that those headers include, or headers that those headers include (etc), lead to the inclusion of <locale> on your implementation.
Solution
There are two workarounds to this.
*
*You may provide a conversion clause to coerce the function pointer into referring to the overload that you wish to use:
std::transform(
s.begin(),
s.end(),
std::back_inserter(out),
(int (*)(int))std::toupper // specific overload requested
);
*You may remove the locale version from the overload set by explicitly using the global toupper:
std::transform(
s.begin(),
s.end(),
std::back_inserter(out),
::toupper // global scope
);
However, recall that whether or not this function in <cctype> is available is unspecified ([17.6.1.2/4]), and using <ctype.h> is deprecated ([C.3.1/1]).
Thus, this is not the option that I would recommend.
(Note: I despise writing angle brackets as if they were part of header names β they are part of #include syntax, not header names β but I've done it here for consistency with the FDIS quotes; and, to be honest, it is clearer...)
A: std::transform(s.begin(), s.end(), s.begin(),
std::bind(&std::toupper<char>, std::placeholders::_1, std::locale()));
If you use the VC toolchain, please include <locale>.
d6534 | train | Case your mapping
PUT /index
{
"mappings": {
"doc": {
"properties": {
"querySearched": {
"type": "text",
"fielddata": true
}
}
}
}
}
Your query should looks like
GET index/_search
{
"size": 0,
"aggs": {
"result": {
"terms": {
"field": "querySearched",
"size": 10
}
}
}
}
You should add fielddata: true in order to enable aggregations on a text type field; more on that here.
"size": 10, => limit to 10
After a short discussion with @Kamal, I feel obligated to let you know that if you choose to enable fielddata: true, you must know that it can consume a lot of heap space.
From the link I've shared:
Fielddata can consume a lot of heap space, especially when loading high cardinality text fields. Once fielddata has been loaded into the heap, it remains there for the lifetime of the segment. Also, loading fielddata is an expensive process which can cause users to experience latency hits. This is why fielddata is disabled by default.
Another alternative (a more efficient one):
PUT /index
{
"mappings": {
"doc": {
"properties": {
"querySearched": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
}
}
Then your aggregation query
GET index/_search
{
"size": 0,
"aggs": {
"result": {
"terms": {
"field": "querySearched.keyword",
"size": 10
}
}
}
}
Both solutions works but you should take this under consideration.
Hope it helps
A: What did you try?
POST
/searches/_search
{
"size": 0,
"aggs": {
"byquerySearched": {
"terms": {
"field": "querySearched",
"size": 10
}
}
}
} | unknown | |
d6535 | train | This works
* one of the simplest ways to understand how I've built the dictionary is to get familiar with the various options of the data frame to_dict() formats
* I really saw a simple pattern: the string is in two parts, S and W, delimited by a constant string. So use a re to get the two parts
* use zip to classify and make building dict keys simple
import re, io
import pandas as pd
import numpy as np
inp = """15.2' 4.3' 16.9' 4.0', GVW kips= 70.6, 9.5, 14.5, 14.1, 15.8, 16.7
3.2' 10.0' , GVW kips= 30.2, 9.5, 11.3, 12.0"""
# remove unwanted spaces and quotes
inp = inp.replace("'","").replace(",","")
d = {r:{f"{k}{c+1}":vv
# tokenise into S & W with "GVW kips=" being delimter
for k,v in zip(["S","W"], re.findall("^([\d. ]*)GVW kips= ([\d. ]*)$", s)[0])
# use re.split so multiple spaces are treated as one
for c, vv in enumerate(re.split("[ ]+", str(v)))
}
for r, s in enumerate(inp.split("\n"))}
pd.DataFrame(d).T.replace({"":np.nan})
output
S1 S2 S3 S4 S5 W1 W2 W3 W4 W5 W6
15.2 4.3 16.9 4.0 NaN 70.6 9.5 14.5 14.1 15.8 16.7
3.2 10.0 NaN NaN NaN 30.2 9.5 11.3 12.0 NaN NaN
A: Add NaN to satisfy the required number of columns. It is converted into a data frame after completing a million rows in a loop process. This method will be faster and more efficient.
s = 5
for i in range(s - len(my_list[0])):
my_list[0].append(np.NaN)
w = 6
for i in range(w - len(my_list[1])):
my_list[1].append(np.NaN)
new = pd.DataFrame(index=[], columns=[])
new = pd.concat([new, pd.Series(sum(my_list,[])).to_frame().T], axis=0, ignore_index=True)
cols = ['S1','S2','S3','S4','S5','W1','W2','W3','W4','W5','W6']
new.columns = cols
new
S1 S2 S3 S4 S5 W1 W2 W3 W4 W5 W6
0 15.2 4.3 16.9 4.0 NaN 9.5 14.5 14.1 15.8 16.7 NaN | unknown | |
d6536 | train | I had the same problem. HAXM installation would never exit and had to use "force quit" in order to kill it.
Found a log message in /var/log/system.log that seemed to coincide with the installation. It was from a totally different application but the same error reoccurred each time I tried to run the HAXM installer:
... com.apple.xpc.launchd[1] (com.paloaltonetworks.authorized[284]): Service exited due to signal: Segmentation fault: 11 sent by exc handler[0]
The errors referred to a daemon called "authorized" from paloaltonetworks. Every time I tried to run the HAXM installer I would see a Segmentation Fault error logged related to the authorized daemon.
So I disabled the authorized daemon temporarily by editing the /Library/LaunchDaemons/com.paloaltonetworks.authorized.plist file and set RunAtLoad to false as well as KeepAlive to false and rebooted. Probably would have been enough to unload and reload the daemon via launchctl but whatever.
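If you would rather not edit the plist by hand, temporarily unloading the daemon with launchctl should achieve the same thing (same plist path as above):
sudo launchctl unload /Library/LaunchDaemons/com.paloaltonetworks.authorized.plist
# ...run the HAXM installer here...
sudo launchctl load /Library/LaunchDaemons/com.paloaltonetworks.authorized.plist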
After rebooting with the authorized daemon disabled I was able to successfully install HAXM. No issues.
Then I re-enabled the authorized daemon by reverting the changes to /Library/LaunchDaemons/com.paloaltonetworks.authorized.plist and rebooted.
Palo Alto Networks Traps (authorized daemon is related to this application) tool is working and HAXM is installed. All good. Hope this helps.
A: BTW -- if Traps is indeed your problem (and it was for me) you can also just disable the Traps stuff from the command line if you can sudo.
$ sudo bash
# cd "/Library/Application Support/PaloAltoNetworks/Traps/bin"
# ./cytool runtime stop all
--- INSTALL HAXM and whatever else ---
# ./cytool runtime start all
That should do the trick without having to reboot, etc. | unknown | |
d6537 | train | The short answer to your problem, is that @variety is undefined in the fields_for @variety. The correct version of that line in /app/views/products/_variety.html.erb is
<% fields_for :variety do |variety_form| -%>
Also there's a minor nitpick in your label line.
<%= variety_form.label :variety %>
should be
<%= variety_form.label :name, "Variety" %>
I can't tell if your goal is to update multiple products and varieties at once or just update a single product's varieties. Assuming the latter, you should be using accepts_nested_attributes_for (scroll down to the Nested Attributes Example), seems like it might be an easier way to do it. Also see the github Complex-Forms-Example repository for a working demonstration.
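A minimal sketch of the model side of that approach, assuming a Product has_many :varieties (adjust names to your schema):
class Product < ActiveRecord::Base
  has_many :varieties
  accepts_nested_attributes_for :varieties, :allow_destroy => true
end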
But this doesn't seem to be the case. It looks to me that your javascript functions are adding form/removing values for product and varieties.
This will work in the controller for adding varieties, but needs a little more work for removing existing varieties. But the railscasts you linked to provides all the information you need to put that together. | unknown | |
d6538 | train | Enclose your script in $(document).ready() so your show command executes only when the document is fully loaded to limit failure.
Like:
echo "
<script>
$( document ).ready(function() {
$('#RegisterModal').modal('show')
});
</script>"; | unknown | |
d6539 | train | That's not how you do this. First, define the projectKats field correctly, i.e.
# You can set max_length as per your choice
projectKats = models.CharField(max_length=50)
You need to do this logic in django forms rather than django models.
So this is how you can do it.
forms.py
from django import forms
from .models import ProjektCat, Software_And_Service
def get_category_choices():
    return [(name, name) for name in ProjektCat.objects.values_list('Option_Name', flat=True).distinct()]
class SoftwareAndServiceForm(forms.ModelForm):
projectKats = forms.ChoiceField(choices=get_category_choices)
class Meta:
model = Software_And_Service
fields = [
'projectKats',
# whatever fields you want
] | unknown | |
d6540 | train | I use try {} finally {} for this. The finally-block runs when try is done or if you use ctrl+c, so you need to either run commands that are safe to run either way, ex. it doesn't matter if you kill a process that's already dead..
Or you could add a test to see if the last command was a success using $?, ex:
try {
Write-Host "Working"
Start-Sleep -Seconds 100
} finally {
if(-not $?) { Write-Host "Cleanup on aisle 5" }
Write-Host "Done"
}
Or create your own test (just in case the last command in try failed for some reason):
try {
$IsDone = $false
Write-Host "Working"
Start-Sleep -Seconds 100
#.....
$IsDone = $true
} finally {
if(-not $IsDone) { Write-Host "Cleanup on aisle 5" }
Write-Host "Done"
}
UPDATE: The finally block will not work for output as the pipeline is stopped on CTRL+C.
Note that pressing CTRL+C stops the pipeline. Objects that are sent to
the pipeline will not be displayed as output. Therefore, if you
include a statement to be displayed, such as "Finally block has run",
it will not be displayed after you press CTRL+C, even if the Finally
block ran.
Source: about_Try_Catch_Finally
However, if you save the output from Receive-Job to a global variable like $global:content = Receive-Job $sleepJob you can read it after the finally-block. The variable is normally created in a different local scope and lost after the finally-block. | unknown | |
d6541 | train | I used to have so many issues with docker and nginx because I didn't understand everything very well.
So here is my recommendation:
Quick fix :
Add
nginx:
restart: always
....
depends_on:
- web
Explanation :
Nginx with upstream can be useful but if the upstream doesn't exist, then nginx will never start.
And in the docker case, you have to say that web should run before nginx using depends_on parameter.
But after upgrading my stack to docker swarm, I discovered that depends_on can't be used anymore across multiple instances
for a reason I don't remember.
So I wanted to start nginx even if my web server is not running, and the easy way to do it is to use a variable in nginx like this:
set $server_1 web;
proxy_pass http://$server_1:8080;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header HOST $host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
I definitely advise you to use the latest docker-compose version because you get so many better features.
Also, using link is quite deprecated; I recommend creating an internal network using the docker network command.
It's very useful for adding or removing an instance, and it saves you time and maintenance effort.
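For example, with compose file format 3 you can declare such a user-defined network right in docker-compose.yml instead of links (the image and network names here are placeholders):
version: "3"
services:
  web:
    image: my-web-image        # placeholder
    networks: [backend]
  nginx:
    image: nginx
    ports: ["80:80"]
    networks: [backend]
networks:
  backend: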
Running multiple containers without Compose
You need to use the docker run command for each container:
docker run -d --name web -p 9000:9000 ....
docker run -d --name nginx -p 80:80 .... | unknown | |
d6542 | train | I think using a dict would make more sense:
d = {
'prefix1' : 'success',
'prefix2' : 'success',
'prefix3' : 'success',
'prefix4' : 'success'
}
for i in range(1,5):
temp = "prefix%s" % i
print d[temp] | unknown | |
d6543 | train | First, it seems there is confusion between Linq and Linq to SQL. Not all of Linq can be translated into SQL queries. For example: Any() and All() can't be used with Linq to Sql here - they are in-memory collection functions. This means that all rows need to be fetched and then resolved afterwards.
You are also not resolving your first two queries, for example by calling ToList().
var blockedusers = DataContext.BlockedUsers.Where(bu => bu.BlockerId == userId);
var followers = DataContext.FollowUser.Where(u => u.FollowFromUserId == userId);
This leaves them to run on-demand every time that they are used (deferred query execution) - meaning they call SQL each time. Beware the IEnumerable!
You could probably get rid of all your issues by doing joins onto BlockedUser and FollowUser all in the same query. For example, use a left join to BlockedUser and eliminate those rows by testing that the blocked user is null.
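A rough sketch of that left join in LINQ query syntax; Posts, PosterId and BlockedId are assumptions about your schema, only BlockedUsers and BlockerId come from your code:
var visiblePosts =
    from p in DataContext.Posts
    join bu in DataContext.BlockedUsers.Where(b => b.BlockerId == userId)
        on p.PosterId equals bu.BlockedId into blocked
    from b in blocked.DefaultIfEmpty()
    where b == null
    select p;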
A: I think you should first OrderBy, Skip, Take and then select the new PostObject in the return. | unknown | |
d6544 | train | Moving every tag with a number to its own layer solved the problem.
The solution is adding:
translateZ(0) | unknown | |
d6545 | train | It's not the outside quotes that matter, it's the literal quotes in the JSON string (must be ")
ie. This is ok (but cumbersome)
double_quote = "{\"key\": \"value\"}"
You can also use triple quotes
'''{"key": "value"}'''
"""{"key": "value"}"""
The choices of quotes are there so you hardly ever need to use the ugly/cumbersome versions
A: That's because only the first example is valid JSON. JSON data have keys and values surrounded by "..." and not '...'.
There are other "rules" that you may not be expecting. There's a great list on this wikipedia page here. For example, booleans should be lowercase (true and false) and not True and False. JSON != Python.
A: JSON is a language-free format for exchanging data. Although single_quote and double_quote make no difference in Python, they're different in JSON cause a JSON object will be processed by other languages as well. | unknown | |
d6546 | train | Once the software is installed, you can start using it. However, you may encounter the following two issues the first time you attempt to run docker commands:
docker
FATA[0000] Get http:///var/run/docker.sock/v1.18/images/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
And the other error is:
docker
FATA[0000] Get http:///var/run/docker.sock/v1.18/containers/json: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
The reason is, you need to start the Docker service first. Moreover, you must run the technology as root, because Docker needs access to some rather sensitive pieces of the system and interacts with the kernel. That's how it works.
systemctl start docker
Now we can go crazy and begin using Docker.
This is the original | unknown | |
d6547 | train | service postgresql status
or
systemctl status postgresql
Must work. | unknown | |
d6548 | train | Once an image has been published on Facebook, FB will always show the cached version. If, however, you want to update the image for a new post, try running the URL through the Debug Tool to update Facebook's cache. Any existing posts that use this image won't be updated.
It's to prevent people from changing content once it's already shared - otherwise it won't represent what the user originally shared. This would be something spammers could easily abuse. | unknown | |
d6549 | train | You could use the base64 property of your encrypted object it returns a String.
In the source code of the package it says that it returns the Encrypted as a Base64 String representation.
pref.setString("key", keyz.base64);
use the same encoding while decrypting
Encrypter.decrypt64(valueFromSharedPref) | unknown | |
d6550 | train | The solution works, but it is inefficient.
You are using randperm to create a vector (array), and then use only the first element of the vector.
You can use randi to create a scalar (single element) instead:
n=5;m=10;
A=zeros(n,m);
for i=1:m
%rand_pos gets a random number in range [1, n].
rand_pos = randi([1, n]);
A(rand_pos, i)=1;
end
You can also use the following "vectorized" solution:
rand_pos_vec = randi([1, n], 1, m);
A(sub2ind([n, m], rand_pos_vec, 1:m)) = 1;
The above solution:
*
*Creates a vector of random values in range [1, n].
*Use sub2ind to convert "row index" to "matrix index".
*Place 1 in "matrix index".
A: You can do it in one line using bsxfun and randi:
A = double(bsxfun(@eq, 1:m, randi(n, n, 1)));
This compares the row vector [1 2 ... m] with an n×1 random vector of values from 1 to n. The comparison is done element-wise with singleton expansion. For each row, exactly one of the values of [1 2 ... m] equals that in the random vector.
d6551 | train | Here is what the official doc says:
Python module to facilitate downloading and deploying WebDriver
binaries. The classes in this module can be used to automatically
search for and download the latest version (or a specific version) of
a WebDriver binary and then extract it and place it by copying or
symlinking it to the location where Selenium or other tools should be
able to find it then.
Reference link
Read more about chromedriver-autoinstaller
Installation
pip install chromedriver-autoinstaller
Usage
Just type import chromedriver_autoinstaller in the module you want to use chromedriver.
Example
from selenium import webdriver
import chromedriver_autoinstaller
chromedriver_autoinstaller.install() # Check if the current version of chromedriver exists
# and if it doesn't exist, download it automatically,
# then add chromedriver to path
driver = webdriver.Chrome()
driver.get("http://www.python.org")
assert "Python" in driver.title
Reference Link here | unknown | |
d6552 | train | You can use data.table::rleid
have %>% mutate(group = data.table::rleid(drug))
# A tibble: 12 x 4
patinet date drug group
<dbl> <date> <chr> <int>
1 1 2022-03-16 a 1
2 1 2022-03-17 a 1
3 1 2022-03-18 a 1
4 1 2022-03-19 b 2
5 1 2022-03-20 b 2
6 1 2022-03-21 b 2
7 1 2022-03-22 c 3
8 1 2022-03-23 c 3
9 1 2022-03-24 c 3
10 1 2022-03-25 a 4
11 1 2022-03-26 a 4
12 1 2022-03-27 a 4 | unknown | |
d6553 | train | Ole Begemann has done something like this. You can find the project here on GitHub.
Ole also writes a superb blog summary of some of the best developer links and tutorials around. Well worth subscribing to!
A: Look at the UIView documentation for animation types available. Here is what I'd use:
UIViewAnimationOptions animation;
if (pageNumberLower) {
animation = UIViewAnimationOptionTransitionCurlDown;
} else {
animation = UIViewAnimationOptionTransitionCurlUp;
}
[UIView transitionWithView:myChangingView
duration:0.5
options:animation
animations:^{ CHANGE PAGE HERE }
completion:NULL]; | unknown | |
d6554 | train | Simple, use Math.Ceiling:
var wholeNumber = (int)Math.Ceiling(fractionalNumber);
A: Something like this?
int myInt = (int)Math.Ceiling(myDecimal);
A: Before saying it does not work, you have to check that ALL VALUES in the operation are double type.
Here is an example in C#:
int speed= Convert.ToInt32(Math.Ceiling((double)distance/ (double)time));
A: Math.Ceiling was not working for me, so I use this code and it works :)
int MyRoundedNumber= (int) MyDecimalNumber;
if (Convert.ToInt32(MyDecimalNumber.ToString().Split('.')[1]) != 0)
MyRoundedNumber++;
and if you want to round a negative number down, for example round -1.1 to -2, use this
int MyRoundedNumber= (int) MyDecimalNumber;
if (Convert.ToInt32(MyDecimalNumber.ToString().Split('.')[1]) != 0)
if(MyRoundedNumber>=0)
MyRoundedNumber++;
else
MyRoundedNumber--;
A: var d = 1.5m;
var i = (int)Math.Ceiling(d);
Console.Write(i); | unknown | |
d6555 | train | Based off of the question here: How to alter title bar height for access form? and here: http://www.pcreview.co.uk/threads/how-can-you-change-an-access-datasheet-column-header-height-or-wra.3309187/, No, Access doesn't have this capability. | unknown | |
d6556 | train | Any returns a bool while Where returns an IQueryable. Being lazy, one would expect Any to terminate as soon as one satisfying element is found (returning true) while Where will search them all.
If you want to select a single customer, Single is what you are looking for.
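For example (illustrative only - the customers source and its Name/Id properties are assumptions):
var smiths = customers.Where(c => c.Name == "Smith");   // a lazy sequence of all matches
var theOne = customers.Single(c => c.Id == 42);         // exactly one element, throws if zero or many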
A: Any() returns a bool. I.e. are there any elements matching the condition. Use Any() if you just want to know if you have elements to work with. E.g. prefer Any() over Count() == 0 for instance as the latter will possibly enumerate the entire sequence to find out if it is empty or not.
Where() returns a sequence of the elements matching the condition.
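A small illustration (the orders source and IsOpen property are just assumptions):
bool anyOpen  = orders.Any(o => o.IsOpen);           // stops at the first match
bool noneOpen = orders.Count(o => o.IsOpen) == 0;    // may enumerate the whole sequence first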
A: Any<> checks whether any items satisfy the criterion, i.e. returns bool, meaning that it only has to find the first item, which can be very fast. Whereas Where<> enumerates all the items that satisfy the condition, meaning that it has to iterate the whole collection.
A: Check the documentation again:
*
*Any<> returns a bool indicating whether at least one item meets the criteria
*Where<> returns an IEnumerable containing the items that meet the criteria
There may be a performance difference in that Any stops as soon as it can determine the result (when it finds a matching item), while Where will need to always loop over all items before returning the result. So if you only need to check whether there are any matching items, Any will be the method for the job.
A: Any tests the lambda/predicate and returns true/false
Where returns the set of objects for which lambda/predicate holds true as IQueryable | unknown | |
d6557 | train | Do you mean something like this?
z_list_generator <- function(k) lapply(1:k, function(i) runif(5 * i))
set.seed(2018) # Fixed random seed for reproducibility
z_list_generator(2)
#[[1]]
#[1] 0.33615347 0.46372327 0.06058539 0.19743361 0.47431419
#
#[[2]]
# [1] 0.3010486 0.6067589 0.1300121 0.9586547 0.5468495 0.3956160 0.6645386
# [8] 0.9821123 0.6782154 0.8060278
length(z_list_generator(2))
#[1] 2
A: Your z_list_generator is strange.
1) You do not initialise mybiglist in your function code. It probably modifies some global variable.
2) You assign mybiglist elements with another list (of length 1), whose first element contains a sample from a uniform distribution. Better assign a, not tmp there.
d6558 | train | You want to find the schema name of the sub grid. You can do this by opening the form editor and then double clicking on the specific sub-grid.
To hide the sub-grid, you want to make sure to use supported JavaScript. To hide:
Xrm.Page.ui.controls.get('ProjectRisks').setVisible(false);
To show:
Xrm.Page.ui.controls.get('ProjectRisks').setVisible(true);
When testing, these worked fine for me. | unknown | |
d6559 | train | I had the same problem and solved it by doing:
export SYMFONY_ENV=prod
A: To clarify, running composer update really solves the problem.
A: It may be a bit out of scope, but I would like to add to Pogus's answer that if you are using Ansible for running composer, you have to provide this env variable like this:
- name: "Install your app dependencies"
composer:
command: install
no_dev: yes
optimize_autoloader: yes
working_dir: "/your/app/dir"
environment: # <---- here
SYMFONY_ENV: prod # <---/
...or in a similar way, read the Ansible environment variables docs for details.
Setting it in places like /etc/profile.d/set-symfony-env-to-prod.sh scripts means it will be used by programs running on your server, but NOT by Ansible.
d6560 | train | createConnection() is the old way to do it. Since typeorm 0.3.x you should use the DataSource object with DataSource.initialize(). | unknown | |
d6561 | train | A closure is simply a function which holds its lexical environment and doesn't let it go until it itself dies.
Think of a closure as Uncle Scrooge:
Uncle Scrooge is a miser. He will never let go of his money.
Similarly a closure is also a miser. It will not let go of its variables until it dies itself.
For example:
function getCounter() {
var count = 0;
return function counter() {
return ++count;
};
}
var counter = getCounter();
See that function counter? The one returned by the getCounter function? That function is a miser. It will not let go of the count variable even though the count variable belongs to the getCounter function call and that function call has ended. Hence we call counter a closure.
See every function call may create variables. For example a call to the getCounter function creates a variable count. Now this variable count usually dies when the getCounter function ends.
However the counter function (which can access the count variable) doesn't allow it to die when the call to getCounter ends. This is because the counter function needs count. Hence it will only allow count to die after it dies itself.
Now the really interesting thing to notice here is that counter is born inside the call to getCounter. Hence even counter should die when the call to getCounter ends - but it doesn't. It lives on even after the call to getCounter ends because it escapes the scope (lifetime) of getCounter.
There are many ways in which counter can escape the scope of getCounter. The most common way is for getCounter to simply return counter. However there are many more ways. For example:
var counter;
function setCounter() {
var count = 0;
counter = function counter() {
return ++count;
};
}
setCounter();
Here the sister function of getCounter (which is aptly called setCounter) assigns a new counter function to the global counter variable. Hence the inner counter function escapes the scope of setCounter to become a closure.
Actually in JavaScript every function is a closure. However we don't realize this until we deal with functions which escape the scope of a parent function and keep some variable belonging to the parent function alive even after the call to the parent function ends.
For more information read this answer: https://stackoverflow.com/a/12931785/783743
A: Returning the function changes nothing, what's important is creating it and calling it. That makes the closure, that is a link from the internal function to the scope where it was created (you can see it, in practice, as a pointer. It has the same effect of preventing the garbaging of the outer scope, for example).
A: By definition of closure, the link from the function to its containing scope is enough. So basically creating the function makes it a closure, since that is where the link is created in JavaScript :-)
Yet, for utilizing this feature we do call the function from a different scope than what it was defined in - that's what the term "use a closure" in practise refers to. This can both be a lower or a higher scope - and the function does not necessarily need to be returned from the function where it was defined in.
Some examples:
var x = null;
function a() {
var i = "from a";
function b() {
alert(i); // reference to variable from a's scope
}
function c() {
var i = "c";
// use from lower scope
b(); // "from a" - not "c"
}
c();
// export by argument passing
[0].forEach(b); // "from a";
// export by assigning to variable in higher scope
x = b;
// export by returning
return b;
}
var y = a();
x(); // "from a"
y(); // "from a"
A: The actual closure is a container for variables, so that a function can use variables from the scope where it is created.
Returning a function is one way of using it in a different scope from where it is created, but a more common use is when it's a callback from an asynchronous call.
Any situation where a function uses variables from one scope, and the function is used in a different scope uses a closure. Example:
var globalF; // a global variable
function x() { // just to have a local scope
var local; // a local variable in the scope
var f = function(){
alert(local); // use the variable from the scope
};
globalF = f; // copy a reference to the function to the global variable
}
x(); // create the function
globalF(); // call the function
(This is only a demonstration of a closure, having a function set a global variable which is then used is not a good way to write actual code.)
A: A collection of explanations of closure is below. To me, the one from the "tiger book" satisfies me most... metaphoric ones also help a lot, but only after I encountered this one...
*
*closure: in set theory, a closure is a (smallest) set on which some operations yield results that also belong to the set, so it's sort of a "smallest closed society under certain operations".
a) sicp: in abstract algebra, where a set of elements is said to be closed under an operation if applying the operation to elements in the set produces an element that is again an element of the set. The Lisp community also (unfortunately) uses the word "closure" to describe a totally unrelated concept: a closure is an implementation technique for representing procedures with free variables.
b) wiki: a closure is a first class function which captures the lexical bindings of free variables in its defining environment. Once it has captured the lexical bindings the function becomes a closure because it "closes over" those variables.
c) tiger book: a data structure on heap (instead of on stack) that contains both function pointer (MC) and environment pointer (EP), representing a function variable;
d) on lisp: a combination of a function and a set of variable bindings is called a closure; closures are functions with local state;
e) google i/o video: similar to an instance of a class, in which the data (instance obj) encapsulates code (vtab), where in the case of a closure, the code (function variable) encapsulates data.
f) the encapsulated data is private to the function variable, implying closure can be used for data hiding.
g) closure in non-functional programming languages: callback with cookie in C is a similar construct, also the glib "closure": a glib closure is a data structure encapsulating similar things: a signal callback pointer, a cookie the private data, and a destructor of the closure (as there is no GC in C).
h) tiger book: "higher-order function" and "nested function scope" together require a solution to the case that a dad function returns a kid function which refers to variables in the scope of its dad implying that even dad returns the variables in its scope cannot be "popup" from the stack...the solution is to allocate closures in heap.
i) Greg Michaelson ($10.15): (in lisp implementation), closure is a way to identify the relationship betw free variables and lexical bound variables, when it's necessary (as often needed) to return a function value with free variables frozen to values from the defining scope.
j) history and etymology: Peter J. Landin defined the term closure in 1964 as having an environment part and a control part as used by his SECD machine for evaluating expressions. Joel Moses credits Landin with introducing the term closure to refer to a lambda expression whose open bindings (free variables) have been closed by (or bound in) the lexical environment, resulting in a closed expression, or closure. This usage was subsequently adopted by Sussman and Steele when they defined Scheme in 1975, and became widespread.
d6562 | train | EDIT: I'm sorry Joe, it looks like I attached your fiddle to the link rather than my updated copy. Please check the link out again.
I've created a JSfiddle using yours for a working example.
I modified your code to make it easier by adding an attribute on your debit input of data-action="sumDebit" and added in this snippet.
$('body').on('change', '[data-action="sumDebit"]', function() { //Attach an event to body that binds to all tags that has the [data-action="sumDebit"] attribute. This will make sure all over dynamically added rows will have the trigger without us having to readd after ever new row.
var total = 0;
$('[data-action="sumDebit"]').each(function(i,e) { //Get all tags with [data-action="sumDebit"]
var val = parseFloat(e.value); //Get int value from string
if(!isNaN(val)) //Make sure input was parsable. If not, result come back as NaN
total += val;
});
$('#totaldbt').val(total); //Update value to total
});
A: I have fixed your code. Please check it.
var ctr = 1;
var FieldCount = 1;
$('#fst_row').on('click', '.button-add', function() {
ctr++;
var cashacc_code = 'cashacc_code' + ctr;
var cashacc = 'cashacc' + ctr;
var cash_narrat = 'cash_narrat' + ctr;
var cashdeb = 'cashdeb' + ctr;
var cashcredit = 'cashcredit' + ctr;
var newTr = '<tr class="jsrow"><td><input type="number" class=' + "joe" + ' id=' + cashacc_code + ' name="cashaccCode" onchange="calSum()" keyup="calSum()" placeholder="NNNN" /></td><td><select class="form-control" id="cashacc" ><option value="">TDS A/C Name1</option><option value="1">Joe</option><option value="2">Joe</option><option value="3">Joe</option></select></td><td><input type="text" class=' + "joe" + ' id=' + cash_narrat + ' placeholder="Enter Here" /></td><td><input type="number" class=' + "joe" + ' id=' + cashdeb + ' ' + FieldCount + ' placeholder="NNNN" /></td><td><input type="number" class=' + "joe" + ' id=' + cashcredit + ' /></td><td style="width: 4%"><img src="./img/plus.svg" class="insrt-icon button-add"><img src="./img/delete.svg" class="dlt-icon"></td></tr>';
$('#cashTable').append(newTr);
$(document).on('click', '.dlt-icon', function() {
$(this).parents('tr.jsrow').first().remove();
});
});
/* second row */
var ctr = 1;
var FieldCount = 1;
$('#sndRow').on('click', '.button-add', function() {
ctr++;
var rowNum = 'rowNum' + ctr;
var cashacc_nme = 'cashacc_nme' + ctr;
var acc_narrat = 'acc_narrat' + ctr;
var accdeb = 'accdeb' + ctr;
var accCredit = 'accCredit' + ctr;
var newTr = '<tr class="jsrow"><td><input type="number" class=' + "joe" + ' id=' + rowNum + ' name="cashaccCode" onchange="calSum()" keyup="calSum()" placeholder="NNNN" /></td><td><select class="form-control" id="cashacc_nme" ><option value="">Account Name 1</option><option value="1">Plumz</option><option value="2">Plumz</option><option value="3">Plumz</option></select></td><td><input type="text" class=' + "joe" + ' id=' + acc_narrat + ' placeholder="Enter Here" /></td><td><input type="number" class=' + "joe debClass" + ' id=' + accdeb + ' ' + FieldCount + ' placeholder="NNNN" /></td><td><input type="number" class=' + "joe" + ' id=' + accCredit + ' /></td><td style="width: 4%"><img src="./img/plus.svg" class="insrt-icon button-add"><img src="./img/delete.svg" class="dlt-icon"></td></tr>';
$('#cashTable').append(newTr);
$(document).on('click', '.dlt-icon', function() {
$(this).parents('tr.jsrow').first().remove();
});
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<table id="cashTable" class="table table-bordered table-striped" required>
<tbody>
<tr id="fst_row">
///First row
<td>
<input type="number" onchange="calSum()" keyup="calSum()" id="cashacc_code" placeholder="NNNN" class="form-control" name="cashaccCode" />
</td>
<td>
<select class="form-control selectsch_items" name="cashacc" id="cashacc">
<option value="Choose and items">Choose and items</option>
<option value="1">TDS A/c Name 1</option>
<option value="2">TDS A/c Name 2</option>
</select>
</td>
<td>
<input type="text" id="cash_narrat" placeholder="Enter here" class="form-control" pattern="[a-zA-Z0-9-_.]{1,20}" name="cash_narrat" data-toggle="modal" data-target="#narratModal" />
</td>
<td>
<input type="number" id="cashdeb" placeholder="Debit Amount" class="form-control" name="cashdeb" readonly />
</td>
<td>
<input type="text" id="cashcredit" class="form-control" name="cashcredit" readonly />
</td>
<td class="tblBtn" style="width: 4%">
<a href="#"><img src="./img/plus.svg" class="insrt-icon button-add"></a>
<a href="#"><img src="./img/delete.svg" class="dlt-icon dlt-icon"></a>
</td>
</tr>
//// second row
<tr id="sndRow">
<td>
<input type="number" onchange="calSum()" keyup="calSum()" class="form-control" id="rowNum" name="cashaccCode" placeholder="NNNN" />
</td>
<td>
<select class="form-control selectsch_items" name="cashacc_nme" id="cashacc_nme">
<option value="#">Choose and items</option>
<option value="1">Joe</option>
<option value="2">Joe2</option>
</select>
</td>
<td>
<input type="text" class="form-control" id="acc_narrat" placeholder="Enter here" name="acc_narrat" data-toggle="modal" data-target="#accnarratModal" />
</td>
<td>
<input type="number" class="form-control debClass" id="accdeb" placeholder="NNNNNN" name="accdeb" />
</td>
<td>
<input type="number" id="accCredit" class="form-control" name="accCredit" readonly />
</td>
<td style="width: 4%">
<a href="#"><img src="./img/plus.svg" id="debsum" class="insrt-icon button-add"></a>
<a href="#"><img src="./img/delete.svg" class="dlt-icon"></a>
</td>
</tr>
</tbody>
</table>
<div class="row">
<div class="col-6">
<div class="cashTotal">
<p class="tableTotal">Total:</p>
</div>
</div>
<div class="col-6">
<input type="number" class="totaldeb" id="totaldbt" name="totaldbt" readonly>
</div>
</div>
<script>
function calSum(){
var calvalue = 0;
$("input[name*='cashaccCode']").each( function( key, item ) {
//alert( key + ": " + item.value );
calvalue = calvalue + parseFloat(item.value);
});
$("#totaldbt").val(calvalue);
$
}
</script> | unknown | |
d6563 | train | Taken from Michael Bleigh's comment:
"The Firebase CLI does support GOOGLE_APPLICATION_CREDENTIALS, but you don't need to "log in" with them. If the environment variable is pointing to a valid service account you should be able to just use CLI commands as if you are logged in. You do need to be logged out for GAC to work correctly. Run the command with --debug if you're getting errors while trying to do so."
I can confirm I have this working. Note you might need to run firebase use <project-id> for it to work correctly.
A: You can't sign in to your Firebase project with a service account. You will need to use the proper user account of a collaborator on the project to sign in with Firebase tools.
Even when using the CI integration, the documentation says to:
*
*Start the signin process by running the following command:
firebase login:ci
*Visit the URL provided, then sign in using a Google account. | unknown | |
d6564 | train | You can use the following sequence that works perfectly fine for me.
@echo off
::::::::::::::::::::::::::::::::::::::::::::
:checkPrivileges
NET FILE 1>NUL 2>NUL
if '%errorlevel%' == '0' (goto gotPrivileges) else (goto getPrivileges)
:getPrivileges
echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\getadmin.vbs"
echo For Each strArg in WScript.Arguments >> "%temp%\getadmin.vbs"
echo If strArg = WScript.Arguments.Item^(0^) Then d = Left^(strArg, InStrRev^(strArg,"\"^) - 1^) >> "%temp%\getadmin.vbs"
echo args = args ^& " " ^& strArg >> "%temp%\getadmin.vbs"
echo Next >> "%temp%\getadmin.vbs"
echo UAC.ShellExecute "cmd.exe", ^("/c start /D """ ^& d ^& """ /B" ^& args ^& " ^& exit"^), , "runas", 4 >> "%temp%\getadmin.vbs"
cscript "%temp%\getadmin.vbs" ""%~s0"" %*
del /q "%temp%\getadmin.vbs"
exit /b
:gotPrivileges
:: Your code here
This VBS script uses UAC.ShellExecute properly; you can spot the changes yourself. You used a lot of tricks, and the VBS code doesn't even call UAC in the correct way, which is why it gives "permission denied". I believe that cacls and scripts like this were widely used in the past by malware to obtain administrative access without the user's permission - there are many programs that use icacls for this, which I won't name - and that may have made using icacls difficult.
You can also use Powershell previously to do it.
Powershell -Command "Start-Process cmd -Verb RunAs -ArgumentList '/c C:\YourPath\yourprogram.bat'"
Both of the scripts work perfectly fine for me. I can see a lot of errors in your .vbs, but there is no point in pointing them out because the script is not yours.
I recommend that you use the PowerShell approach; it is much more useful and simpler to integrate:
NET FILE >NUL 1>NUL 2>NUL
if %errorlevel% equ 0 (goto gotprivileges) else (start Powershell -Command "Start-Process cmd -Verb RunAs -ArgumentList '/c C:\YourPath\yourprogram.bat'" & exit)
Hope this helps,
K. | unknown | |
d6565 | train | You're asking quite a number of things.
*
*To get the line on top of the bar, it seems we have to first draw the bars and afterwards the line. Drawing the line last shrinks the xlims, so we have to apply them explicitly.
*Moving the legend is more complicated. Normally you just do ax1.legend(loc='upper left'), but in our case with two plots this seems to always draw a second legend with the last drawn plot as only entry.
*There is a function set_bbox_to_anchor with little documentation. It defines some box (x, y, width, height), but there is also a seemingly inaccessible loc parameter that controls how the box and the position relate. "The default for loc is loc="best" which gives unpredictable results when the bbox_to_anchor argument is used." Some experimentation might be needed. The safest solution is to remove the automatically placed legend and create a new, combined one with an explicit loc, as the code below does.
*Setting the text is simple. Just iterate over the y positions. Place at x position 0,1,2,.. and center horizontally (vertically at bottom).
*To remove the spines, it seems there are two axes over each other (what probably also causes the zorder not to work as desired). You'll want to hide the spines of both of them.
*To remove the ticks, use ax1.axes.yaxis.set_ticks([]).
*To switch the ax2 ticks to the left use ax2.yaxis.tick_left().
import pandas as pd
from matplotlib import pyplot as plt
data = {'Year': {0: '2016', 1: '2017', 2: '2018', 3: '2019', 4: '2020'},
'Some': {0: 9, 1: 13, 2: 21, 3: 18, 4: 28},
'All': {0: 157, 1: 189, 2: 216, 3: 190, 4: 284},
'Ratio': {0: 0.05732484076433121,
1: 0.06878306878306878,
2: 0.09722222222222222,
3: 0.09473684210526316,
4: 0.09859154929577464}}
df = pd.DataFrame(data)
ax1 = df.plot(x="Year", y="All",
kind="bar",
)
for i, a in df.All.items():
ax1.text(i, a, str(a), ha='center', va='bottom', fontsize=18)
xlims = ax1.get_xlim()
ax2 = df.plot(x="Year", y="Ratio",
kind="line", linestyle='-', marker='o', color="orange", ax=ax1, secondary_y=True,
figsize=((24, 12))
)
ax2.set_xlim(xlims) # needed because the line plot shortens the xlims
# ax1.get_legend().set_bbox_to_anchor((0.03, 0.9, 0.1, 0.1)) # unpredictable behavior when loc='best'
# ax1.legend(loc='upper left') # in our case, this would create a second legend
ax1.get_legend().remove() # remove badly placed legend
handles1, labels1 = ax1.get_legend_handles_labels()
handles2, labels2 = ax2.get_legend_handles_labels()
ax1.legend(handles=handles1 + handles2, # create a new legend
labels=labels1 + labels2,
loc='upper left')
# ax1.yaxis.tick_right() # place the yticks for ax1 at the right
ax2.yaxis.tick_left() # place the yticks for ax2 at the left
ax2.set_ylabel('Ratio')
ax2.yaxis.set_label_position('left')
ax1.axes.yaxis.set_ticks([]) # remove ticks
for ax in (ax1, ax2):
for where in ('top', 'right'):
ax.spines[where].set_visible(False)
plt.show() | unknown | |
d6566 | train | It isn't obligatory for app to support new futures of iOS6 like so-called 'GiraffeMode' of iPhone5. | unknown | |
d6567 | train | aggregate
db.collection.aggregate({
"$unwind": "$data"
},
{
"$match": {
"data.id": "0001"
}
},
{
"$project": {
"_id": "$data.id",
"type": "$data.type",
"name": "$data.name",
"ppu": "$data.ppu"
}
})
mongoplayground | unknown | |
d6568 | train | If you are feeling brave, try something like
ls *.sac | fgrep -v -f gd.list | xargs echo rm
Note that I've put an echo in that xargs, just to make sure no one has a cut and paste accident.
Note also the limitations of this approach mentioned in the comments. As I said, if you are feeling brave...
A: The rm command is commented out so that you can check and verify that it's working as needed. Then just un-comment that line.
The check directory section will ensure you don't accidentally run the script from the wrong directory and clobber the wrong files.
You can remove the echo deleting line to run silently.
#!/bin/bash
cd /home/me/myfolder2tocleanup/
# Exit if the directory isn't found.
if (($?>0)); then
echo "Can't find work dir... exiting"
exit
fi
for i in *; do
if ! grep -qxFe "$i" filelist.txt; then
echo "Deleting: $i"
# the next line is commented out. Test it. Then uncomment to remove the files
# rm "$i"
fi
done
You can find the answer here https://askubuntu.com/questions/830776/remove-file-but-exclude-all-files-in-a-list by L. D. James
A: There are a few alternatives.
I'd prefer to use find with -print0 (and grep -z/-Z), as the NUL-delimited output more clearly demarcates the file names:
find . -maxdepth 1 -name '*.sac' -print0 | grep -x -z -Z -f gd.list | xargs -0 echo rm
Again, test this first. Perhaps sort the output and make sure it is unique versus the original file.
For a smaller list of filenames I would recommend just using find with -and -not -name and -delete, but with a larger list that can be tricky.
You could tag the files you want to keep as read-only, then delete the wildcard with the appropriate setting in rm or find to skip read-only files. That assumes you own the read-only flag. You could tag the files as executable, and use find, if the read-only flag is not for you.
Another option would be to move the matching files to a temp folder, delete the wildcard, then move the files you want to keep back. That is assuming you can afford for the files to disappear temporarily.
To make them disappear for a shorter time, move the kept files out to a temp directory, move the original directory out, move the temp directory in, then delete the moved out directory.
d6569 | train | I don't think that's possible without custom implementation like this: https://github.com/jenssegers/laravel-mongodb
You can check these too:
*
*https://github.com/Indatus/trucker
*https://github.com/CristalTeam/php-api-wrapper
I'm not sure if anything of these fits to your case but it's a good start point. | unknown | |
d6570 | train | When you have before_validation declarations and they return false, you'll get a Validation failed (ActiveRecord::RecordInvalid) message with an empty error message (if there are no other errors).
Note that before_validation callbacks must not return false (nil is okay) and this can happen by accident, e.g., if you are assigning false to a boolean attribute in the last line inside that callback method. Explicitly write return true in your callback methods to make this work (or just true at the end if your callback is a block (as noted by Jesse Wolgamott in the comments)).
UPDATE: This will no longer be an issue starting Rails 5.0, as return false will no longer halt the callback chain (throw :abort will now halt the callback chain).
UPDATE: You might also receive ActiveRecord::RecordNotSaved: Failed to save the record if a callback returns false.
A: I think the problem lies in the controller code. The order variable is set before the line item is destroyed, and is not aware it's been destroyed afterwards. This code should really be in the model:
# line_item.rb
after_destroy :update_totals!
delegate :update_totals, :to=> :order
And the controller should just destroy the line item.
A: Regarding 1. Why is my activerecord errors model not saying what the validation error is?, see if you have the gem i18n installed. If you do, try uninstalling or an earlier version of the gem i18n.
gem uninstall i18n
A: It looks to me like you are using Ruby 1.8.7. Have you tried running your app using Ruby 1.9.3?
A: When you create another record in a before_validation method and it fails, the error will be thrown by the 'parent' class, so it won't show the underlying error, just <ActiveRecord::RecordInvalid: Validation failed: >. I noticed this when I got an error in my 'child' record while using byebug inside the before_validation method.
A: Throwing a reply in here as it took a bit for us to track this down. We were upgrading to Rails 5.2 and suddenly started getting this exception.
It was due to us overriding destroyed? on the model (we were soft deleting items). | unknown | |
d6571 | train | I would do a multibinding http://www.scottlogic.co.uk/blog/colin/2010/05/silverlight-multibinding-solution-for-silverlight-4/ for XConverter and YConverter (with the ConvertBack method filled in).
I would have each XConverter and YConverter bound to both textboxes. Then in XConverter replace only before the ; and YConverter replace after the ;
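A rough sketch of such a converter, written against the WPF-style IMultiValueConverter that the linked Silverlight multibinding solution mimics (treat the class and its wiring as illustrative, not as the exact API of that library):
using System;
using System.Globalization;
using System.Windows.Data;
public class SemicolonConverter : IMultiValueConverter
{
    // Combine the two textbox values back into the single "x;y" string.
    public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
    {
        return string.Format("{0};{1}", values[0], values[1]);
    }
    // Split the stored "x;y" value into the two parts the textboxes bind to.
    public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
    {
        var parts = ((value as string) ?? ";").Split(';');
        return new object[] { parts[0], parts.Length > 1 ? parts[1] : "" };
    }
}
An XConverter/YConverter pair would do the same thing but return only the part before or after the ';'.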
A: I would probably utilize a decorator class for class X.
Internally, the decorator class would break the myVar into separate properties, each of which can be bound to the screen. Then you would have a ToX() method to get the X object back out of the decorator. That method would do the easy construction of the myVar property. Example is something like this:
public class X
{
public string myVar { get; set; }
}
public class XDecorator
{
public XDecorator(X x)
{
var pieces = x.Split(';');
XPart = pieces[0];
YPart = pieces[1];
}
public string XPart { get; set; }
public string YPart { get; set; }
public X ToX()
{
return new X { myVar = string.Format("{0};{1}", XPart, YPart) };
}
}
Then, when you're setting this property on your ViewModel, you first wrap it in this decorator class. And when you go to save it, you call the ToX() method to get the X object out - the thing you really want.
Hope that makes sense. | unknown | |
d6572 | train | location ~ ^(.*\.txt)$ {
alias /home/laike9m/$1;
}
Solved it. | unknown | |
d6573 | train | I think you need to read more about how functions work.
Once you return anything, the function will end.
You cannot iterate over anything and return multiple values within a function.
Try saving them locally in the function, and then at the end returning a list/dict/tuple with all the results.
For instance... I think your code could be written:
def _tot_get_deposit(self, cr, uid, ids, name, arg, context=None):
res = {}
results = []
for deposit in self.browse(cr, uid, ids, context=context):
sum = 0.0
sum = A - B
results.append( sum )
return (res,results)
this will create a list of "sum" which is then added to your dict "res" and then returned. together as a tuple. | unknown | |
d6574 | train | Unlike other SQL dialects, you cannot use just the word JOIN to specify an inner join in Access (JET) SQL. You have to use both keywords: a INNER JOIN b.
Interestingly enough, I just tested it and JET does allow for LEFT JOIN and RIGHT JOIN, without the OUTER keyword.
Change your query to read FROM AP a INNER JOIN Vendor b and it should work. | unknown | |
d6575 | train | Create controlanum as a table-valued function instead of a view
IF EXISTS (SELECT * FROM dbo.sysobjects WHERE ID = OBJECT_ID('[dbo].[controlanum]') AND XTYPE IN ('FN', 'IF', 'TF'))
DROP FUNCTION [dbo].[controlanum]
GO
CREATE FUNCTION [dbo].[controlanum] (
@emp int
,@mes int
,@ano int
)
RETURNS @numeros TABLE (numero int)
AS
BEGIN
INSERT @numeros
SELECT numero
FROM ctrc WITH (NOLOCK)
WHERE EMITENTE = @emp
AND MONTH (EMISSAODATA ) = @mes
AND YEAR (EMISSAODATA) = @ano
RETURN
END
GO
Everywhere you reference controlanum, pass it your 3 filter values. For instance:
--...other code here...
SELECT @min = MIN(numero)
FROM dbo.controlanum(@emp, @mes, @ano)
SELECT @max = MAX(numero)
FROM dbo.controlanum(@emp, @mes, @ano)
--...other code here...
SELECT tempordernumber
FROM #TempTable A
LEFT JOIN dbo.controlanum(@emp, @mes, @ano) O
ON A.TempOrderNumber <> O.numero
WHERE O.numero IS NULL | unknown | |
d6576 | train | This is likely too late to be any help to this poster, but JSON is JavaScript Object Notation, which means the language for which the quote needs to be escaped is JavaScript, rather than VB.Net. To escape a single or a double quote in JavaScript, you can replace it with a backslash followed by the single or double quote. That is a little tricky to write in a Regex -- and this becomes a much nastier problem in C#.Net because C# also uses backslash for escaping -- but this example should get you started:
Dim s AS String
s = "This is a ""quote"" test."
s = Regex.Replace(s,"[""]","\\""")
' After VB.Net quote-escaping and replacing the string-delimiting quote marks with the slashes more familiar to JavaScript users, that reduces to /["]/ (with an implicit global replace directive) for the pattern, and a string consisting of a backslash followed by a double-quote, for the replacement value. | unknown | |
d6577 | train | Link in this case can mean several things but we can unpack all the possible scenarios:
*
*A shortcut (.lnk file). These files must have the .lnk extension because the file extension is how Windows decides which handler to invoke when you double-click/execute the file. If you create a shortcut to a jpg file the real name can be link.jpg.lnk but the user will see the name as link.jpg in Explorer because .lnk is a special extension that is hidden.
*A symbolic link (symlink). These are links on the filesystem level and can have any extension. mklink "c:\mylink.jpg" "c:\file.jpg"
*A hardlink. This is another name for the same file (alias), it does not create a shortcut. mklink /H "c:\anothername.jpg" "c:\file.jpg" | unknown | |
d6578 | train | netstat -a on Windows. The -b option will also give you the listening executable name, but it requires elevation (i. e. admin rights).
With -n it will work much faster, but the addresses and the protocols will remain numeric.
All options can be specified with - or with / (e. g. /a, /b, /n, etc). netstat /? will dump all command line options. The /? option works for most other Windows commands too.
On Linux, the options of netstat are different. | unknown | |
d6579 | train | The update statement is decrementing the value of SeatsAvailable on the Theatre table for tid=2 (the AND CINEMA_SESSION.sid = 2 is immaterial - you are updating the row on the THEATER table). Since tid is the primary key for theatre, there is only one record with that value, and that record is updated.
Your select for session sid=3 joins to theatre by the tid column, and it's matching the row where tid=2, which is why you're seeing the new value - it is matching the record that you just updated.
This may make more sense if you just look at the contents of the THEATER table (without any joins).
If you are trying to indicate that a seat has been booked for a particular session, then I would suggest you need to be updating a field on a different table. I'll leave that to you to work out. Hope this helps. | unknown | |
d6580 | train | You can use BitConverter.GetBytes to get the bytes comprising an Int32. There will be 4 bytes in the result, however, not 2.
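For instance (a small illustrative snippet):
int number = 38633;
byte[] bytes = BitConverter.GetBytes(number);   // length 4 for an Int32
Console.WriteLine(bytes.Length);                // prints 4; byte order is little-endian on typical platforms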
A: Is it an int16?
Int16 i = 7;
byte[] ba = BitConverter.GetBytes(i);
This will only have two bytes in it.
A: Another way to do it, although not as slick as other methods:
Int32 i = 38633;
byte b0 = (byte)(i % 256);
byte b1 = (byte)(i / 256);
A: Assuming you just want the low bytes:
byte b0 = (byte)i,
b1 = (byte)(i>>8);
However, since 'int' is 'Int32' that leaves 2 more bytes uncaptured.
A: Option 1:
byte[] buffer = BitConverter.GetBytes(number);
Option 2:
byte[] buffer = new byte[2];
buffer[0] = (byte) number;
buffer[1] = (byte)(number >> 8);
I prefer option 1! | unknown | |
d6581 | train | Maybe organize it like this, so that the color is easily changed:
package Trial;
import javax.swing.*;
import java.awt.*;
public class ColorRed extends JApplet {
private GradientPaint black;
private GradientPaint yellowOrange;
public void init() {
setBlack(new GradientPaint(50,20,Color.BLACK,50,50,Color.BLACK));
setYellowOrange(new GradientPaint(50,20,Color.YELLOW,50,50,Color.RED));
}
public void setBlack(GradientPaint black) {
this.black = black;
}
public void setYellowOrange(GradientPaint yellowOrange) {
this.yellowOrange = yellowOrange;
}
public void paint(Graphics g){
super.paint(g);
Graphics2D g2 = (Graphics2D)g;
blackDiamond(g2,black);
redDiamond(g2,yellowOrange);
}
public void blackDiamond(Graphics2D g2,GradientPaint gradientPaint){
int a [] = {100,50,100,150,100};
int b [] = {10,60,110,60,10};
g2.setPaint(gradientPaint);
fillPolygon(a,b,5,g2);
}
public void redDiamond(Graphics2D g2,GradientPaint gradientPaint){
int a2 [] = {100,60,100,140,100};
int b2 [] = {20,60,100,60,20};
g2.setPaint(gradientPaint);
fillPolygon(a2,b2,5,g2);
}
public void fillPolygon(int a [], int b [] ,int c,Graphics2D g2){
g2.fillPolygon(a,b,c);
}
}
Unfortunately I didn't locate an online swing runner to test it.
A: I modified Adder's answer (which worked as advertised) so it would work without using JApplet (which is also tagged as deprecated). I added some comments where different.
import java.awt.Color;
import java.awt.Dimension;
import java.awt.GradientPaint;
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;
public class ColorRed extends JPanel {
private GradientPaint black;
private GradientPaint yellowOrange;
JFrame frame = new JFrame();
public static void main(String[] args) {
SwingUtilities.invokeLater(() -> new ColorRed().init());
}
public void init() {
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
// set the panel size
setPreferredSize(new Dimension(500, 500));
// add panel to frame.
frame.add(this);
// adjust frame and subcomponents
frame.pack();
// center on screen
frame.setLocationRelativeTo(null);
frame.setVisible(true);
setBlack(new GradientPaint(50, 20, Color.BLACK, 50, 50, Color.BLACK));
setYellowOrange(new GradientPaint(50, 20, Color.YELLOW, 50, 50, Color.RED));
}
public void setBlack(GradientPaint black) {
this.black = black;
}
public void setYellowOrange(GradientPaint yellowOrange) {
this.yellowOrange = yellowOrange;
}
// use paintComponent(g) and not paint(g)
public void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2 = (Graphics2D) g;
blackDiamond(g2, black);
redDiamond(g2, yellowOrange);
}
public void blackDiamond(Graphics2D g2, GradientPaint gradientPaint) {
int a[] = { 100, 50, 100, 150, 100
};
int b[] = { 10, 60, 110, 60, 10
};
g2.setPaint(gradientPaint);
fillPolygon(a, b, 5, g2);
}
public void redDiamond(Graphics2D g2, GradientPaint gradientPaint) {
int a2[] = { 100, 60, 100, 140, 100
};
int b2[] = { 20, 60, 100, 60, 20
};
g2.setPaint(gradientPaint);
fillPolygon(a2, b2, 5, g2);
}
public void fillPolygon(int a[], int b[], int c, Graphics2D g2) {
g2.fillPolygon(a, b, c);
}
} | unknown | |
d6582 | train | I would suggest using a different peer connection for each stream and in addition you should call stop() on the media stream. As part of the clean up between playback instances, you may also want to clear the link to the stream in the element like so:
if (moz) {
document.getElementById('yourvideoelementid').mozSrcObject = undefined;
} else {
document.getElementById('yourvideoelementid').src = "";
}
The moz indicates the browser is Mozilla/Firefox. | unknown | |
d6583 | train | You can bypass the certificate validation process with the following code snippet
ServicePointManager.ServerCertificateValidationCallback
+= (sender, certificate, chain, sslPolicyErrors) => true; | unknown | |
d6584 | train | If I understand you correctly, and others are interpreting your question differently, what you have is:
*
*A class with a property
*A category on that class
And you want to call a particular method automatically before any category method is called on a given instance, that method would "initialise" the category methods by modifying the property.
In other words you want the equivalent of a subclass with its init method, but using a category.
If my understanding is correct then the answer is no, there is no such thing as a category initializer. So redesign your model not to require it, which may be to just use a subclass - as that provides the behaviour you are after.
The long answer is you could have all the category methods perform a check, say by examining the property you intend to change to see if you have. If examining the property won't determine if an object has been "category initialized" then you might use an associated object (look in Apple's runtime documentation), or some other method, to record that fact.
HTH
Addendum: An even longer/more complex solution...
GCC & Clang both support a function (not method) attribute constructor which marks a function to be called at load time, the function takes no parameters and returns nothing. So for example, assume you have a class Something and a category More, then in the category implementation file, typically called Something+More.m, you can write:
__attribute__((constructor)) static void initializeSomethingMore(void)
{
// do stuff
}
(The static stops the symbol initializeSomethingMore being globally visible, you neither want to pollute the global name space or have accidental calls to this function - so you hide it.)
This function will be called automatically, much like the standard class + (void) initialize method. What you can then do using the Objective-C runtime functions is replace the designated initializer instance methods of the class Something with your own implementations. These should first call the original implementation and then initialize your category before returning the object. In outline you define a method like:
- (id) categoryVersionOfInit
{
self = [self categoryVersionOfInit]; // NOT a mistake, see text!
if (self)
{
// init category
}
return self;
}
and then in initializeSomethingMore switch the implementations of init and categoryVersionOfInit - so any call of init on an instance of Something actually calls categoryVersionOfInit. Now you see the reason for the apparently self-recursive call in categoryVersionOfInit - by the time it is called the implementations have been switched so the call invokes the original implementation of init... (If you're crosseyed at this point just draw a picture!)
Using this technique you can "inject" category initialization code into a class. Note that the exact point at which your initializeSomethingMore function is called is not defined, so for example you cannot assume it will be called before or after any methods your target class uses for initialization (+ initialize, + load or its own constructor functions).
A: Sure, it is possible through objc/runtime and objc_getAssociatedObject/objc_setAssociatedObject
check this answer
A: No, it's not possible in Objective-C. A category is a way to add only methods to an existing class; you cannot add properties to it.
Read this
Why can't I @synthesize accessors in a category? | unknown | |
d6585 | train | Maybe you enabled scrollability of the TextView by
1. in Java code : TextView.setMovementMethod(new ScrollingMovementMethod());
2. in XML : android:scrollbars="vertical"
... but only the first step actually enables scrolling of the text; the second is not needed.
So the answer is below:
1. in Java code : TextView.setMovementMethod(new ScrollingMovementMethod());
2. in XML : android:scrollbars="none"
The difference is the value of android:scrollbars: vertical -> none
A: In XML
android:background="@android:color/transparent"
Or in Kotlin Class
editText?.isVerticalScrollBarEnabled = false
In Java Class
A: you can use this code
<ScrollView
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:scrollbars="none">
<TextView
here your text
/>
</ScrollView>
A: I implemented the OnTouchListener inside the adapter and set it on the text view. The logic for the touch event is: I check if the touch event is a tap or a swipe.
@Override
public boolean onTouch(View v, MotionEvent motionEvent) {
switch (motionEvent.getAction()) {
case MotionEvent.ACTION_DOWN:
mIsScrolling = false;
mDownX = motionEvent.getX();
break;
case MotionEvent.ACTION_MOVE:
float deltaX = mDownX - motionEvent.getX();
if ((Math.abs(deltaX) > mSlop)) { // swipe detected
mIsScrolling = true;
}
break;
case MotionEvent.ACTION_CANCEL:
case MotionEvent.ACTION_UP:
if (!mIsScrolling) {
openNewScreen(v); // this method is used for click listener of the ListView
}
break;
}
return false;
}
A: android:scrollbarThumbVertical="@android:color/transparent"
This keeps the scrolling but sets the scrollbar thumb to transparent.
d6586 | train | I would try this out:
(?:\w+\W+){5}((?:\w.?)+)(?:\w+\W+){5}
Though natural language processing with regular expressions cannot be accurate.
A: ((?:[\w!@#$%&*]+\s+){5}([\w!@#$%&*]+\.)(?:\s+[\w!@#$%&*]+){5})
Try this.See demo.
https://regex101.com/r/aQ3zJ3/9 | unknown | |
d6587 | train | Need to add PasteSpecial Paste:=xlPasteValues
Next time try Recording a macro and modifying the code
Sheets("log").Range("A125:f1000").Copy
Sheets("data").Cells(Rows.Count, "A").End(xlUp).Offset(1). _
PasteSpecial Paste:=xlPasteValues, _
Operation:=xlNone, SkipBlanks:=False, Transpose:=False
A: Without using clipboard:
Sheets("data").Cells(Rows.Count, "A").End(xlUp).Offset(1).Value = Sheets("log").Range("A125:f1000").Value | unknown | |
d6588 | train | Yes, (depending on your task) it can matter quite a lot, which algorithm you choose.
You can also be sure the mice developers wouldn't put effort into providing different algorithms if there was one algorithm that always performs best anyway. Because, of course, like in machine learning, the "No free lunch theorem" is also relevant for imputation.
In general you can say, that the default settings of mice are often a good choice.
Look at this example from the miceRanger Vignette to see, how far imputations can differ for different algorithms. (the real distribution is marked in red, the respective multiple imputations in black)
The Predictive Mean Matching (pmm) algorithm e.g. makes sure that only imputed values appear, that were really in the dataset. This is for example useful, where only integer values like 0,1,2,3 appear in the data (and no values in between). Other algorithms won't do this, so while doing their regression they will also provide interpolated values like on the picture to the right ( so they will provide imputations that are e.g. 1.1, 1.3, ...) Both solutions can come with certain drawbacks.
That is why it is important to actually assess imputation performance afterwards. There are several diagnostic plots in mice to do this. | unknown | |
d6589 | train | All that's needed is a component for the parent that has a template <ui-view></ui-view>. Otherwise the child has no place to render its view.
d6590 | train | ModelSelect2Multiple from django-autocomplete-light seems perfect for your use case. | unknown | |
d6591 | train | You'll be looking at the Observer pattern or something similar. The gist of it is this: somewhere you have to keep a list (ArrayList suffices) of type "your interface". Each time a new object is created, add it to this list. Afterwards you can perform a loop on the list and call the method on every object in it.
I'll edit in a moment with a code example.
public interface IMyInterface {
void DoSomething();
}
public class MyClass : IMyInterface {
public void DoSomething() {
Console.WriteLine("I'm inside MyClass");
}
}
public class AnotherClass : IMyInterface {
public void DoSomething() {
Console.WriteLine("I'm inside AnotherClass");
}
}
public class StartUp {
private ICollection<IMyInterface> _interfaces = new Collection<IMyInterface>();
private static void Main(string[] args) {
new StartUp();
}
public StartUp() {
AddToWatchlist(new AnotherClass());
AddToWatchlist(new MyClass());
AddToWatchlist(new MyClass());
AddToWatchlist(new AnotherClass());
Notify();
Console.ReadKey();
}
private void AddToWatchlist(IMyInterface obj) {
_interfaces.Add(obj);
}
private void Notify() {
foreach (var myInterface in _interfaces) {
myInterface.DoSomething();
}
}
}
Output:
I'm inside AnotherClass
I'm inside MyClass
I'm inside MyClass
I'm inside AnotherClass
Edit: I just realized you tagged it as Java. This is written in C#, but there is no real difference other than the use of ArrayList instead of Collection.
A: An interface defines a service contract. In simple terms, it defines what you can do with a class.
For example, let's use a simple interface called ICount. It defines a count method, so every class implementing it will have to provide an implementation.
public interface ICount {
public int count();
}
Any class implementing ICount, should override the method and give it a behaviour:
public class Counter1 implements ICount {
//Fields, Getters, Setters
@Override
public int count() {
//I don't wanna count, so I return 4.
return 4;
}
}
On the other hand, Counter2 has a different opinion of what count should do:
public class Counter2 implements ICount {
int counter; //Default initialization to 0
//Fields, Getters, Setters
@Override
public int count() {
return ++counter;
}
}
Now, you have two classes implementing the same interface, so, how do you treat them equally? Simple, by using the first common class/interface they share: ICount.
ICount count1 = new Counter1();
ICount count2 = new Counter2();
List<ICount> counterList = new ArrayList<ICount>();
counterList.add(count1);
counterList.add(count2);
Or, if you want to save some lines of code:
List<ICount> counterList = new ArrayList<ICount>();
counterList.add(new Counter1());
counterList.add(new Counter2());
Now, counterList contains two objects of different type but with the same interface in common (ICount) in a list containing objects that implement that interface. You can iterate over them and invoke the method count. Counter1 will always return 4 while Counter2 will return a result based on how many times you have invoked count:
for(ICount current : counterList)
System.out.println(current.count());
A: You can't call a method from all the objects that happen to implement a certain interface at once. You wouldn't want that anyways. You can, however, use polymorphism to refer to all these objects by the interface name. For example, with
interface A { }
class B implements A { }
class C implements A { }
You can write
A b = new B();
A c = new C();
A: Interfaces don't work that way. They act like some kind of mask that several classes can use. For instance:
public interface Data {
public void doSomething();
}
public class SomeDataStructure implements Data {
public void doSomething()
{
// do something
}
}
public static void main(String[] args) {
Data mydataobject = new SomeDataStructure();
}
This uses the Data 'mask' that several classes can use and have certain functionality, but you can use different classes to actually implement that very functionality.
A: The crux would be to have a list that stores each new instance every time a class that implements the interface is instantiated. This list would have to be available at a level different from the interface and the class that implements it. In other words, the class that orchestrates or controls would have the list.
An interface is a contract that leaves the implementation to the classes that implement the interface. Classes that implement the interface abide by that contract and implement the methods rather than override them.
Taking the interface to be
public interface Model {
public void onUpdate();
public void onClick();
}
public class plugin implements Model {
@Override
public void onUpdate() {
System.out.println("Pluging updating");
}
@Override
public void onClick() {
System.out.println("Pluging doing click action");
}
}
Your controller class would be the one to instantiate and control the action
public class Controller {
public static void orchestrate(){
List<Model> modelList = new ArrayList<Model>();
Model pluginOne = new plugin();
Model plugTwo = new plugin();
modelList.add(pluginOne);
modelList.add(plugTwo);
for(Model model:modelList){
model.onUpdate();
model.onClick();
}
}
}
You can have another implementation called pluginTwo, instantiate it, add it to the list and call the methods specified by the interface on it. | unknown | |
d6592 | train | There are a couple of problems here.
First of all, KeyBindings will work only if currently focused element is located inside the element where KeyBindings are defined. In your case you have a ListBoxItem focused, but the KeyBindings are defined on the child element - TextBlock. So, defining a KeyBindings on a TextBlock will not work in any case, since a TextBlock cannot receive focus.
Second of all, you probably need to know what to copy, so you need to pass the currently selected log item as parameter to the Copy command.
Furthermore, if you define a ContextMenu on a TextBlock element it will be opened only if your right-click exactly on the TextBlock. If you click on any other part of the list item, it will not open. So, you need to define the ContextMenu on the list box item itself.
Considering all of that, what I believe you are trying to do can be done in the following way:
<ListBox ItemsSource="{Binding Logs, Mode=OneWay}"
x:Name="logListView"
IsSynchronizedWithCurrentItem="True">
<ListBox.InputBindings>
<KeyBinding Key="C"
Modifiers="Ctrl"
Command="Copy"
CommandParameter="{Binding Logs/}" />
</ListBox.InputBindings>
<ListBox.CommandBindings>
<CommandBinding Command="Copy"
Executed="CopyLogExecuted"
CanExecute="CanExecuteCopyLog" />
</ListBox.CommandBindings>
<ListBox.ItemContainerStyle>
<Style TargetType="{x:Type ListBoxItem}">
<Setter Property="ContextMenu">
<Setter.Value>
<ContextMenu>
<MenuItem Command="Copy"
CommandParameter="{Binding}" />
</ContextMenu>
</Setter.Value>
</Setter>
</Style>
</ListBox.ItemContainerStyle>
<ListBox.ItemTemplate>
<DataTemplate>
<TextBlock Text="{Binding}" />
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
Here we define a KeyBinding on the ListBox itself and specify the currently selected list box item (the log entry) as the CommandParameter.
The CommandBinding is also defined at the ListBox level, and it is a single binding for both the right-click menu and the keyboard shortcut.
The ContextMenu is defined in the style for ListBoxItem, with its CommandParameter bound to the data item represented by that ListBoxItem (the log entry).
The DataTemplate just declares a TextBlock bound to the current data item.
And finally, there is only one handler for the Copy command in the code-behind:
private void CopyLogExecuted(object sender, ExecutedRoutedEventArgs e) {
var logItem = e.Parameter;
// Copy log item to the clipboard
}
private void CanExecuteCopyLog(object sender, CanExecuteRoutedEventArgs e) {
e.CanExecute = true;
}
A: Thanks to Pavlov Glazkov for explaining that the key and command bindings need to go at the ListBox level, rather than the item template level.
This is the solution I now have:
<ListBox ItemsSource="{Binding Logs, Mode=OneWay}">
<ListBox.InputBindings>
<KeyBinding Key="C"
Modifiers="Ctrl"
Command="Copy"/>
</ListBox.InputBindings>
<ListBox.CommandBindings>
<CommandBinding Command="Copy"
Executed="KeyCopyLog_Executed"
CanExecute="CopyLog_CanExecute"/>
</ListBox.CommandBindings>
<ListBox.ItemTemplate>
<DataTemplate>
<TextBlock Text="{Binding Path=.}">
<TextBlock.ContextMenu>
<ContextMenu>
<MenuItem Command="Copy">
<MenuItem.CommandBindings>
<CommandBinding Command="Copy"
Executed="MenuCopyLog_Executed"
CanExecute="CopyLog_CanExecute"/>
</MenuItem.CommandBindings>
</MenuItem>
</ContextMenu>
</TextBlock.ContextMenu>
</TextBlock>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
Where KeyCopyLog_Executed is:
private void KeyCopyLog_Executed(object sender, System.Windows.Input.ExecutedRoutedEventArgs e)
{
if ((sender as ListBox).SelectedItem != null)
{
LogItem item = (LogItem) (sender as ListBox).SelectedItem;
Clipboard.SetData("Text", item.ToString());
}
}
and MenuCopyLog_Executed is:
private void MenuCopyLog_Executed(object sender, System.Windows.Input.ExecutedRoutedEventArgs e)
{
LogItem item = (LogItem) ((sender as MenuItem).TemplatedParent as ContentPresenter).DataContext;
Clipboard.SetData("Text", item.ToString());
} | unknown | |
d6593 | train | You can try this code, which reads the file in blocks of characters:
using (StreamReader sr = new StreamReader(yourPath))
{
//This is an arbitrary size for this example.
char[] c = null;
while (sr.Peek() >= 0)
{
c = new char[5]; // read a block of up to 5 characters
int n = sr.Read(c, 0, c.Length); // Read returns how many characters were actually read
Console.WriteLine(new string(c, 0, n)); // print only the characters that were read
}
}
Link : http://msdn.microsoft.com/en-us/library/9kstw824.aspx
A: line = buffer.ToString();
This statement is to blame. buffer is a char array, and calling ToString() on an array just returns the type name (System.Char[]), not the characters it contains.
A: Use: line= new string(buffer);
Instead | unknown | |
d6594 | train | You should stop calling runRandom inside your functions. Only use runRandom once you actually want a result (for example, to print it, since you can't do that inside the monad). Trying to 'escape' from the monad is a futile task and will only produce confusing and often non-functioning code. The final output of all of your functions will be inside the monad, so you don't need to escape anyway.
Note that
gen <- get
let n = runRandom (randR(1, length (deck))) gen :: Int
is exactly equivalent to
n <- randR (1, length deck)
The <- syntax runs the computation in the monad on the right and 'puts' its result into the variable name on the left.
Shuffling:
shuffleR [] = return []
shuffleR xs = do
(y:ys) <- removeR xs -- 1
zs <- shuffleR ys -- 2
return (y:zs) -- 3
The function is straightforward recursion:
1) extract a random element, 2) shuffle what is left, 3) combine the results.
edit: extra info requested:
randSum :: (Num b, Random b) => State StdGen b
randSum = do
a <- randR (1,6)
b <- randR (1,6)
return $ a + b
compiles just fine. Judging from your description of the error, you are trying to call this function inside the IO monad. You cannot mix monads (or at least not so simply). If you want to 'execute' something of type RandState inside of IO you will indeed have to use runRandom here.
n <- randR (1, length deck) makes n an Int because length deck has type Int and randR :: Random a => (a, a) -> RandState a, so from the context we can infer a ~ Int and the type unifies to (Int, Int) -> RandState Int.
Just to recap
Wrong:
try = do
a <- randomIO :: IO Int
b <- randR (0,10) :: RandState Int
return $ a + b -- monads don't match!
Right:
try = do
a <- randomIO :: IO Int
let b = runRandom (randR (0,10)) (mkStdGen a) :: Int -- 'execute' the randstate monad
return $ a + b | unknown | |
d6595 | train | I am not sure if this solves your problem, but it looks like a typical case for a database view instead of a direct table fetch.
In a view you can control exactly which columns are exposed for reading and which are not.
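A rough sketch of that approach with a Hibernate/JPA entity mapped to a view (the view name and columns are assumptions for illustration, not taken from your schema):
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.Immutable;

// Entity mapped to a database view that exposes only the readable columns.
@Entity
@Immutable                          // Hibernate treats these rows as read-only
@Table(name = "customer_public_v")  // hypothetical view name
public class CustomerPublicView {

    @Id
    private Long id;

    @Column(name = "name")
    private String name;

    // Sensitive columns are simply not part of the view,
    // so they can never be read through this mapping.

    public Long getId() { return id; }
    public String getName() { return name; }
}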
A: There's no way of Hibernate read-protection. You can protect fields from being updated or inserted using the column declarations insertable = false and updatable = false.
If you don't want to expose some fields to the user, you should handle that with higher-level logic, for example by filtering the fields out of your JSON.
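For example, a minimal sketch showing both ideas on one entity (the field names are assumptions for illustration): the @Column flags stop Hibernate from writing the column, and Jackson's @JsonIgnore keeps the field out of the serialized JSON.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import com.fasterxml.jackson.annotation.JsonIgnore;

@Entity
public class Account {

    @Id
    private Long id;

    // Never included in INSERT or UPDATE statements, but still read on load.
    @Column(name = "created_by", insertable = false, updatable = false)
    private String createdBy;

    // Still read by Hibernate, but Jackson leaves it out of the JSON output.
    @JsonIgnore
    @Column(name = "password_hash")
    private String passwordHash;

    public Long getId() { return id; }
    public String getCreatedBy() { return createdBy; }
}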
A: I have found solution. In entity class, if I join column I can simply put annotation @JsonIgnoreProperties({"prop1", "prop2"}), or if it is standard type property @JsonIgnore is enough. | unknown | |
d6596 | train | The error message is telling you what to do:
function call missing argument list;
use '&MotionThread::MoveProjectile' to create a pointer to member
^
Therefore, here's the correct syntax:
Thread^ MotionThread1 = gcnew Thread(
gcnew ParameterizedThreadStart(MoveProj, &MotionThread::MoveProjectile));
^
For the other one, you're currently trying to create a delegate, without telling it what method the delegate should point to. Try something like this:
Thread^ MainThread = gcnew Thread(gcnew ThreadStart(this, &MyClass::MainMethod));
^^^^^^^^^^^^^^^^^^^^^^^^^^
Edit
I didn't read through your full code. If you expect people to take their time to help you, you need to take some time & spend the effort to distill it down to just what's needed.
I will, however, comment on the errors you're getting.
error C2440: 'initializing' :
cannot convert from 'MotionThread' to 'MotionThread ^'
You've got a variable somewhere that's a reference type, but you're using it without the ^. This is valid C++/CLI, but none of the managed APIs will work with that easily. Switch the member to a ^ and use gcnew.
error C3352: 'float Allformvariables::CalcCurrentVelocity(System::Object ^)' :
the specified function does not match the delegate type 'void (void)'
As the error message says: You're trying to construct a delegate that doesn't take any parameters and returns void, and the method you're passing doesn't match that. Either fix the method or switch to a different delegate type.
error C3754: delegate constructor: member function 'MotionThread::MoveProjectile'
cannot be called on an instance of type 'MotionThread'
I have a feeling this one will go away when you add the missing ^ that I mentioned above. | unknown | |
d6597 | train | Well, it turns out that a good night's sleep and a cold shower made me rethink the whole issue.
I'm still very new to the concept of mocking, so it still hasn't sunk in quite right.
The thing is, there's no need to override the patch to a mocked object. It's a mocked object and that means I can make it do anything. So my first try was:
@mock.patch('method_to_patch', mock.Mock(return_value=1))
class Tests(TestCase):
def test_override(self):
method_to_patch.return_value = 2
(....)
That worked, but had the side effect of changing the return value for all following tests. So then I tried:
@mock.patch('method_to_patch', mock.Mock(return_value=1))
class Tests(TestCase):
def test_override(self):
method_to_patch.return_value = 2
(....)
method_to_patch.return_value = 1
And it worked like a charm. But it seemed like too much code. So then I went down the road of context management, like this:
@mock.patch('method_to_patch', mock.Mock(return_value=1))
class Tests(TestCase):
def test_override(self):
with mock.patch('method_to_patch', mock.Mock(return_value=2)):
(....)
I think it seems clearer and more concise.
About the order in which the patch decorators were being applied, it's actually the correct order. Just like stacked decorators are applied from the bottom up, a method decorator is supposed to be called before the class decorator. I guess it makes sense, I was just expecting the opposite behavior.
Anyway, I hope this help some poor newbie soul like mine in the future. | unknown | |
d6598 | train | Use count(*) or count(1):
SELECT COUNT(*) AS "Count of each Grade",
GradeGiven,
COUNT(*) * 100.0/(SELECT COUNT(*) from StudentGrades) AS "Percentage"
FROM StudentGrades
GROUP BY GradeGiven;
Confusion over count(<column name>) is why I don't think it should be used, at least by beginners in SQL. You can read more about my opinion in this matter here. | unknown | |
d6599 | train | The built-in with-open works on anything you can call .close on, so the normal approach is to use something like:
(with-open [connections (create-connections)]
(do-stuff connections))
and handle errors opening connections within the code that failed to open them. If create-connections fails to open one of the connections then perhaps a try ... finally block within create-connections is a cleaner place to handle that sort of error condition. | unknown | |
d6600 | train | Not sure if I understand what you need. As I understand your code, status.replies.all().update(has_read=True) doesn't change status but only changes the replies. If that's true, the code should do what you want. If it isn't, you could make a copy of status and return the copy:
if status.user == current_user:
old_status = status.make_copy()
status.replies.all().update(has_read=True)
return old_status
return status
Or do you just want the method to return early and do the database update asynchronously? Then you should have a look at celery and maybe this nice explanation. | unknown |