_id (string, 2-6 chars) | partition (3 classes) | text (string, 4-46k chars) | language (1 class) | title (1 class)
---|---|---|---|---|
d19601 | test | Update Laravel Mix
npm install --save-dev laravel-mix@latest
Update Your NPM Scripts
If your build throws an error such as Unknown argument: --hide-modules, the scripts section of your package.json file will need to be updated. The Webpack 5 CLI removed a number of options that your NPM scripts were likely referencing.
While you're at it, go ahead and switch over to the new Mix CLI.
Before
"scripts": {
"dev": "npm run development",
"development": "cross-env NODE_ENV=development node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js",
"watch": "npm run development -- --watch",
"watch-poll": "npm run watch -- --watch-poll",
"hot": "cross-env NODE_ENV=development node_modules/webpack-dev-server/bin/webpack-dev-server.js --inline --hot --disable-host-check --config=node_modules/laravel-mix/setup/webpack.config.js",
"prod": "npm run production",
"production": "cross-env NODE_ENV=production node_modules/webpack/bin/webpack.js --no-progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js"
},
After
"scripts": {
"dev": "npm run development",
"development": "mix",
"watch": "mix watch",
"watch-poll": "mix watch -- --watch-options-poll=1000",
"hot": "mix watch --hot",
"prod": "npm run production",
"production": "mix --production"
},
A: I had this issue and solved it by downgrading laravel-mix to
"laravel-mix": "^5.0.9"
then running:
npm install
A: Remove --hide-modules from your package.json and then run npm run dev; it will run without errors.
A: In my case I had to switch to Node version 14, as I was on 18.
My steps.
(1) Check current node version.
nvm list # -->v18.4
(2) Switched to node 14.
nvm use 14.19
(3) Installed again
npm install
(4) Run dev
npm run dev
It worked. | unknown | |
d19602 | test | Your problem is here:
<form action="return addCustomer(this.add_LN, this.add_FN, this.add_PN, this.add_DOB);">
You don't want the form action to be a JavaScript call; you need to wire this up via onclick (or onsubmit) instead.
The arguments above by @thiefmaster and @martin are correct: using jQuery is much simpler. You are right about the framework thing, but in this case you would otherwise have to hand-code around a lot of trouble spots (handling responses, knowing when an AJAX call has succeeded, different browsers, etc.).
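For illustration, a minimal sketch of the markup (the field names are assumptions carried over from the original call, and addCustomer is assumed to return false so the normal submit is suppressed):
<form onsubmit="return addCustomer(this.add_LN.value, this.add_FN.value, this.add_PN.value, this.add_DOB.value);">
<input name="add_LN"> <input name="add_FN">
<input name="add_PN"> <input name="add_DOB">
<input type="submit" value="Add customer">
</form>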
A: xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
// put your codes here like
document.getElementById("txtHint").innerHTML=xmlhttp.responseText;
// document.getElementById("show_label").innerHTML='text as you wish like updated enjoy';
}
}
And add a label with the id of show_label | unknown | |
d19603 | test | I think the problem is that the CollectionView doesn't listen for the PropertyChanged events of its elements, and RaisePropertyChanged(nameof(this.Bilder)); doesn't work either because the CollectionView itself is not really changed.
I would recommend creating the CollectionView in code via CollectionViewSource.GetDefaultView(list). That way you can control the CollectionView from your model and call ICollectionView.Refresh when needed.
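A minimal sketch of that approach (the view model and property names are assumptions based on the code below):
// using System.ComponentModel; using System.Windows.Data;
private ICollectionView _bilderView;

public BilderViewModel() // hypothetical view model
{
    _bilderView = CollectionViewSource.GetDefaultView(this.Bilder);
    _bilderView.SortDescriptions.Add(new SortDescription(nameof(BildNotifiableModel.Reihenfolge), ListSortDirection.Ascending));
}

private void MoveSeiteUp()
{
    // ...swap the Reihenfolge values as in the methods below...
    _bilderView.Refresh(); // re-applies the sort without replacing the collection
}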
A: In your methods, create a new collection and assign it to "Bilder". Just raising PropertyChanged will trigger an evaluation for referential equality. If the reference is the same - which it will be if you just move items around inside it - it will not update the DataGrid.
If you are not using the ObservableCollection's features, such as automatic updates when items are added or removed, you might also change it to a "normal" List.
private void MoveSeiteUp()
{
const int smallestReihenfolge = 1;
if (this.SelectedBild.Reihenfolge > smallestReihenfolge) {
var bildToSwapReihenfolgeWith = this.Bilder.Single(b => b.Reihenfolge == this.SelectedBild.Reihenfolge - 1);
this.SelectedBild.Reihenfolge--;
bildToSwapReihenfolgeWith.Reihenfolge++;
this.Bilder = new ObservableCollection<BildNotifiableModel> (this.Bilder);
RaisePropertyChanged(nameof(this.Bilder));
}
}
private void MoveSeiteDown()
{
if (this.SelectedBild.Reihenfolge < MaxAllowedImages) {
var bildToSwapReihenfolgeWith = this.Bilder.Single(b => b.Reihenfolge == this.SelectedBild.Reihenfolge + 1);
this.SelectedBild.Reihenfolge++;
bildToSwapReihenfolgeWith.Reihenfolge--;
this.Bilder = new ObservableCollection<BildNotifiableModel> (this.Bilder);
RaisePropertyChanged(nameof(this.Bilder));
}
} | unknown | |
d19604 | test | You can find this recent work on Face Detection, Pose Estimation and Landmark Localization in the Wild. It has a working Matlab implementation and it is quite a good method.
Once you identify keypoints on all your faces you can morph them into a single reference and work from there.
A: The easiest way is with PCA and the eigenvectors, to find the most representative X and Y directions of the data. That way you'll get the direction of the face.
You can find an explanation in this document: PCA Alignment
A: Do you need to detect the faces first, or are they already cropped? If you need to detect the faces, you can use vision.CascadeObjectDetector object in the Computer Vision System Toolbox.
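For example, a minimal detection sketch (the file name is an assumption):
% Detect faces with the Computer Vision System Toolbox
detector = vision.CascadeObjectDetector(); % default frontal-face model
img = imread('faces.jpg');
bboxes = step(detector, img); % one [x y width height] row per detected face
annotated = insertShape(img, 'Rectangle', bboxes);
imshow(annotated);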
To align the faces you can try the imregister function in the Image Processing Toolbox. Alternatively, you can use a feature-based approach. The Computer Vision System Toolbox includes a number of interest point detectors, feature descriptors, and a matchFeatures function to match the descriptors between a pair of images. You can then use the estimateGeometricTransform function to estimate an affine or even a projective transformation between two images. See this example for details. | unknown | |
d19605 | test | You can place a div with that img as background-image and set 2 buttons/divs with position absolute.
Like this:
.home {
background-color:red;
width:100%;
height:100vh;
position:relative;
}
.btn1 {
width:200px;
height:200px;
background-color:blue;
position:absolute;
top:20%;
left:30%;
}
.btn2 {
width:200px;
height:200px;
background-color:green;
position:absolute;
top:20%;
left:50%;
}
<div class="home">
<a class="btn1" href="#">
page one
</a>
<a class="btn2" href="#">
page two
</a>
</div> | unknown | |
d19606 | test | Turns out I was wrong about the problem. I'm working in a Rails app, and I'm rendering my charts via Erb templates, with the title as a parameter.
Solution was to pass that parameter like this raw(title_string), so the single quote character is escaped. Issue was about Erb, not Google Chart.
WhiteHat comment helped me realise that, thanks :)
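For reference, a minimal sketch of what that can look like in the Erb template (the variable name is an assumption):
// chart options rendered from Erb; raw() keeps Rails from escaping the apostrophe
var options = {
  title: '<%= raw(title_string) %>'
};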
A: I appreciate this is an old question, but as I too have had this issue with title and column values, I'll pop in my solution.
If your title is
title: 'my title's great'
then you need to escape the ' in the title with a backslash \ . It becomes
title: 'my title\'s great'
If your title is
title: "my title's "great""
then you need to escape the " in the title with a backslash \ . It becomes
title: "my title's \"great\"".
What I have started to do is put \ before all specials. Some systems may need you to use a double backslash in your code for a single backslash to make it through to the Google chart. | unknown | |
d19607 | test | Yes you can show a form on a service's desktop. It will not be shown to any logged in user, in fact in Vista and later OSes you cannot show it to a user even if you set the service to 'interactive'. Since the desktop is not interactive the windows messages the form receives will be slightly different but the vast majority of the events should be triggered the same in a service as they would be on an interactive desktop (I just did a quick test and got the form load, shown, activated and closing events).
One thing to remember is that in order to show a form your thread must be an STA thread and a message loop must be created, either by calling ShowDialog or Application.Run. Also, remember all external interaction with the form will need to be marshaled to the correct thread using Invoke or BeginInvoke on the form instance.
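As a rough sketch, the thread setup could look like this (ServiceStatusForm is a hypothetical form type):
// using System.Threading; using System.Windows.Forms;
ServiceStatusForm form = null;
var uiThread = new Thread(() =>
{
    form = new ServiceStatusForm();
    Application.Run(form); // creates the message loop on this thread
});
uiThread.SetApartmentState(ApartmentState.STA);
uiThread.IsBackground = true;
uiThread.Start();
// later, from any other thread:
// form.BeginInvoke(new Action(() => { /* touch the form safely here */ }));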
This is certainly very doable but is really not recommended at all. You must be absolutely sure that the form and any components it contains will not show any unexpected UI, such as a message box, under any circumstances. The only time this method can really be justified is when you are working with a dubious quality legacy or 3rd party tool that requires handle creation in order to function properly. | unknown | |
d19608 | test | Looks like they've fixed their documentation for UIDeviceOrientationLandscapeRight:
The device is in landscape mode, with the device held upright and the Home button on the right side. | unknown | |
d19609 | test | I may want to return a copy of the original BitmapImage rather than modifying the original.
There is no good method to directly copy a BitmapImage, but we can reuse the StorageFile several times.
If you just want to select a picture, show it, and meanwhile show a re-sized picture of the original one, you can pass the StorageFile as a parameter like this:
public static async Task<BitmapImage> ResizedImage(StorageFile ImageFile, int maxWidth, int maxHeight)
{
IRandomAccessStream inputstream = await ImageFile.OpenReadAsync();
BitmapImage sourceImage = new BitmapImage();
sourceImage.SetSource(inputstream);
var origHeight = sourceImage.PixelHeight;
var origWidth = sourceImage.PixelWidth;
var ratioX = maxWidth / (float)origWidth;
var ratioY = maxHeight / (float)origHeight;
var ratio = Math.Min(ratioX, ratioY);
var newHeight = (int)(origHeight * ratio);
var newWidth = (int)(origWidth * ratio);
sourceImage.DecodePixelWidth = newWidth;
sourceImage.DecodePixelHeight = newHeight;
return sourceImage;
}
In this scenario you just need to call this task and show the re-sized image like this:
smallImage.Source = await ResizedImage(file, 250, 250);
If you want to keep the BitmapImage parameter for some reason (for example, the sourceImage might be a modified bitmap rather than one loaded directly from a file), and you want to re-size this new picture into another one, you will need to save the re-sized picture as a file first, then open this file and re-size it again. | unknown | |
d19610 | test | You should do the fetching in Filter
import React, { Component } from 'react';
import fetch from 'isomorphic-fetch';
export class Filter extends Component {
state = {
data: null
}
componentWillMount() {
fetch('./pizza.json')
.then(response => response.json())
.then(json => {
console.log('parsed json', json)
// arrow functions keep `this` bound to the component instance
this.setState(() => ({ data: json }))
})
.catch(ex => {
console.log('parsing failed', ex)
});
}
render() {
const { data } = this.state
if(!data){
return <div>Loading...</div>
}
return (
<div>
<h1> Pizza Search App </h1>
(use data here...)
</div>
);
}
}
A: Alex is correct except you need to set the state once you've got the response:
EDIT: I missed that he had another link in his promise chain down there... either way, you only need the one. Like so:
componentWillMount() {
fetch(...).then(res => res.json()).then(data => this.setState({ data })).catch(....
Also, you need to 'stringify' the json in order to display it in the render method of your component. You're not able to display raw objects like that. Sooo... you'll need to do something like
...
render() {
const { data } = this.state
return (
<div>
<pre>
<code>
{JSON.stringify(data, null, 2)} {/* you should also look into mapping this into some kind of display component */}
</code>
</pre>
</div>
)
} | unknown | |
d19611 | test | The regex description on MSDN:
http://msdn.microsoft.com/en-us/library/bb982382.aspx
Basically, you create "basic_regex" objects, then call the "regex_match" or "regex_replace" functions
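For example, a minimal sketch (with current compilers these names live in std::; in a strict TR1 setup they are in std::tr1::):
#include <regex>
#include <string>
#include <iostream>

int main()
{
    std::basic_regex<char> re("(\\d{4})-(\\d{2})-(\\d{2})"); // same as std::regex
    std::smatch match;
    std::string text = "date: 2024-01-31";
    if (std::regex_search(text, match, re))
        std::cout << "year = " << match[1] << '\n';
    return 0;
}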
A: You shouldn't need any specific headers beyond <regex> to include the tr1 functionality. To get started using regular expressions in tr1 I suggest reading: http://www.johndcook.com/cpp_regex.html | unknown | |
d19612 | test | If you are connecting to the right database, everything seems fine to me. I had a similar problem a few weeks ago and the accepted answer of this question fixed my issue.
Here are the steps to run:
rake db:drop:all
rake db:create:all
rake db:migrate
I hope it will fix your problem.
WARNING: this will erase your database.
A: Could you please tell us which OS you are on?
Delete the line:
socket: /tmp/mysql.sock
and run:
db:migrate
Give the output of:
db:migrate:status
If this is not working for you, you could also try to add:
host: 127.0.0.1
to your database.yml file
A: If nothing stated above works please do check your schema.rb for migration contents. If migration contents are already there then just do the below command in production:
rails db:schema:load RAILS_ENV=production. | unknown | |
d19613 | test | I would use Automapper to map the two classes together in one call. This type of situation is what it was designed for. All you would need to do is create a mapping configuration and then map the two classes. It's a very simple but powerful tool. e.g. Something like this:
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<APICall.Group, modGroup>()
        .ForMember(dest => dest.modID, opt => opt.MapFrom(src => src.Id));
    // etc. create more mappings here
});
var mapper = config.CreateMapper();
List<modGroup> modGroupList = mapper.Map<List<modGroup>>(groupData); | unknown | |
d19614 | test | Sounds like all you need is:
awk '/Program X output/ && c++{exit} 1' file
e.g.
$ seq 50 | awk '/2/ && c++{exit} 1'
1
2
3
4
5
6
7
8
9
10
11
If that's not all you need then edit your question to clarify your requirements and show us concise, testable sample input and expected output.
A: Keep track of how often you see the keywords, and print only when this count is an odd number:
awk '/Program X output/ {n++} n%2 == 1' <<END
Program X output
a
b
c
Program X output
d
e
Program X output
f
g
h
i
j
Program X output
m
n
o
END
Program X output
a
b
c
Program X output
f
g
h
i
j
A: This might work for you (GNU sed):
sed -r '/Program X output/{x;s/^/x/;x};G;/\n(x{2})*$/!P;d' file
When encountering a header line, add 1 to a counter in the hold space (HS). Append the HS to every line and only print the first line in the pattern space (PS) if the counter is a multiple of the required amount. | unknown | |
d19615 | test | I can certainly confirm that this way will create A LOT OF PROBLEMS later while using filtering etc.
In general, such an approach is critically against the very relational database architecture. It's ok as long as you treat your database as a silly key-value storage, but absolutely unacceptable if you are going to use your database as a database.
The main rule for the database structure should be: each entity has to be stored separately. This way it will be accessible using standard SQL mechanisms.
One of the possible ways to solve your current case, is creating a table with three columns:
user_id, param_name, param_value | unknown | |
d19616 | test | 63 AA:63 EV: Fnum:6.3
2
00:00:02,000 --> 00:00:03,000
HOME(11.6488,51.7185) 2016.06.02 13:19:12
GPS(11.6488,51.7185,17) BAROMETER:3.5
ISO:100 Shutter:400 A:63 AA:63 EV: Fnum:6.3
and this is how I would like them to look afterwards
3
00:00:03,000 --> 00:00:04,000
BAROMETER:4.3
4
00:00:04,000 --> 00:00:05,000
BAROMETER:5.3
A: grep -oE "^[0-9].*$|^$|BAROMETER.*$" input-file
EDIT: Based on your comment it seems that you want to do it for every file in a folder and replace in-place. In this case it is better to use find and sed, e.g.
find "$@" -type f -name "*.srt" -exec \
sed -i.bak '/^[0-9]\|BAROMETER\|^$/!d;s/^.*\(BAROMETER\)/\1/' {} \;
I have never used Automator for anything really, but there are quite detailed examples in the documentation. | unknown | |
d19617 | test | Spring is inferring the value to be a number. You can force the value to be treated as a string in YAML config by quoting it ie "845216416540"
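For example (the property name is just a placeholder):
# application.yml - quoted, so Spring reads it as a string rather than a number
account:
  number: "845216416540"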
This answer covers YAML convention in detail: https://stackoverflow.com/a/22235064/13172778 | unknown | |
d19618 | test | Perhaps though I have misunderstood what to supply to the function call - isn't it the window width/height?
How in the world you think that the window resolution influences texture sizes is beyond me. You normally render shadow mapping depth maps using a framebuffer object, so the window dimensions are irrelevant.
What are the parameters supposed to be?
For a cube map: The edge length of the cube map texture. | unknown | |
d19619 | test | len(pairs[]) raises a SyntaxError because the square brackets are empty:
>>> pairs = [('cheese', 'queso'), ('red', 'rojo'), ('school', 'escuela')]
>>> pairs[]
File "<stdin>", line 1
pairs[]
^
SyntaxError: invalid syntax
>>>
You need to tell Python where to index the list pairs:
>>> pairs = [('cheese', 'queso'), ('red', 'rojo'), ('school', 'escuela')]
>>> pairs[0] # Remember that Python indexing starts at 0
('cheese', 'queso')
>>> pairs[1]
('red', 'rojo')
>>> pairs[2]
('school', 'escuela')
>>> len(pairs[0]) # Length of tuple at index 0
2
>>> len(pairs[1]) # Length of tuple at index 1
2
>>> len(pairs[2]) # Length of tuple at index 2
2
>>>
I think it would be beneficial for you to read An Introduction to Python Lists and Explain Python's slice notation. | unknown | |
d19620 | test | You can use Windows Management Instrumentation:
If you have not used wmic before you should install it by running wmic from cmd.exe.
It should then say something like:
WMIC Installing... please wait.
After that wmic is ready for use:
function getProcessId( $imagename ) {
ob_start();
passthru('wmic process where (name="'.$imagename.'") get ProcessId');
$wmic_output = ob_get_contents();
ob_end_clean();
// Remove everything but numbers and commas between numbers from output:
$wmic_output = preg_replace(
array('/[^0-9\n]*/','/[^0-9]+\n|\n$/','/\n/'),
array('','',','),
$wmic_output );
if ($wmic_output != '') {
// WMIC returned valid PId, should be safe to convert to int:
$wmic_output = explode(',', $wmic_output);
foreach ($wmic_output as $k => $v) { $wmic_output[$k] = (int)$v; }
return $wmic_output;
} else {
// WMIC did not return valid PId
return false;
}
}
// Find out process id's:
if ($pids = getProcessId( "chrome.exe" )) {
foreach ($pids as $pid) {
echo "Chrome.exe is running with pid $pid";
}
} else {
echo "Chrone.exe is not running";
}
I have not tested this and just wrote it out of my head so there might be some fixing and you should check wmic's output by running it from commandline with same args to see if preg_replace() is doing it right (get pid from wmic's output).
UPDATE:
Tested and it seems that wmic does not return any status codes, so I updated my php function to reflect this behavior.
UPDATE:
Now it handles multiple processes too and returns all pids as indexed array or false when no process running.
About WMI:
Windows Management Instrumentation is very powerful interface and so is wmic commandline tool. Here is listed some of WMI features | unknown | |
d19621 | test | Apparently, a high number of QPushButtons is "expensive" and slows the program down. So there seems to be no way to generate 10,000 to 20,000 QPushButtons at once without delay.
What worked, however, was to only show the visible push buttons and generate new buttons as they become visible in the window. | unknown | |
d19622 | test | Well, I am not an expert in Entity Framework, but I am answering in terms of repository and unit of work.
To begin with, avoid the unnecessary wrapper of an additional generic repository, as you are already using a full ORM. Please refer to this answer.
but in .NET Core i do not have a DbContextTransaction.
The DbContextTransaction is important but not a key for implementing unit of work in this case. What is important is DBContext. It is DBContext that tracks and flushes the changes. You call SaveChanges on DBContext to notify that you are done.
I would want to make this transactional
I am sure there must be something available to replace DbContextTransaction or to represent transaction.
One way suggested by Microsoft is to use it as below:
context.Database.BeginTransaction()
where context is DbContext.
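A rough sketch of how that could wrap several repository calls (the repositories are hypothetical and assumed to share the same context):
using (var transaction = context.Database.BeginTransaction())
{
    try
    {
        orderRepository.Add(order); // hypothetical repositories
        auditRepository.Add(auditEntry);
        context.SaveChanges(); // one flush for all tracked changes
        transaction.Commit();
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
}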
Other way is explained here.
also, avoid using SaveChangesAsync for all repos that get involved
That is possible. Do not put SaveChanges in repositories. Put it in a separate class. Inject that class in each concrete/generic repository. Finally, simply call SaveChanges once when you are done. For sample code, you can have a look at this question. But the code in that question has a bug, which is fixed in the answer I provided to it. | unknown | |
d19623 | test | Multiprocessing could help but this sounds more like a threading problem. Any IO implementation should be made asynchronous, which is what threading does. Better, in python3.4 onwards, you could do asyncio.
https://docs.python.org/3.4/library/asyncio.html
If you have python3.5, this will be useful: https://docs.python.org/3.5/library/asyncio-task.html#example-hello-world-coroutine
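A minimal sketch of the asyncio route, assuming call_api_async is rewritten as a coroutine (async def) so the calls can overlap while waiting on the network:
import asyncio

async def fetch_all(labels):
    tasks = [call_api_async(domain, api_call_1, api_call_2, n, api_key) for n in labels]
    await asyncio.gather(*tasks)

# asyncio.get_event_loop().run_until_complete(fetch_all(label_array))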
You can mix asyncio with multiprocessing to get the optimized result. I use in addition joblib.
import multiprocessing
from joblib import Parallel, delayed
def parallelProcess(i, n_workers):
    # each worker handles the labels whose index falls in its own slice
    for index, label_number in enumerate(label_array):
        if index % n_workers == i:
            call_api_async(domain, api_call_1, api_call_2, label_number, api_key)

if __name__ == "__main__":
    num_cores_to_use = multiprocessing.cpu_count()
    inputs = range(num_cores_to_use)
    Parallel(n_jobs=num_cores_to_use)(delayed(parallelProcess)(i, num_cores_to_use) for i in inputs) | unknown | |
d19624 | test | Formally, C doesn't have pass by reference. It only has pass by value.
You can simulate pass by reference in two ways:
*
*You can declare a function parameter as a pointer, and explicitly use & to pass a pointer to an object in the caller.
*You can declare a function parameter as a pointer, and pass an array, since when you try to pass an array, what actually happens is that the array's value "decays" into a pointer to its first element. (And you can also declare the function parameter as an array, to make it look even more like there's an array being passed, but you're actually declaring a pointer in any case.)
So it's more a question of which parameters you think of as being passed by reference.
You'll often hear it said that "arrays are passed by reference in C", and this isn't false, but arguably it isn't strictly true, either, because (as mentioned) what's actually happening is that a pointer to the array's first element is being passed, by value.
A: The pedantic answer is that all of them are passed by value, but sometimes that value is an address.
More usefully, assuming there's no errors in what you have here, all of these functions are pass-by-address which is as close as C gets to pass-by-reference. They take in a pointer as an argument and they may dereference that pointer to write data to wherever it points. Only the call to g() is safe with just these lines because the other pointers are undefined.
This is the standard in C APIs for in/out parameters which don't need to be reallocated.
Functions that need to return a new variable-sized buffer usually take in a pointer to a pointer, so they may dereference the argument to get an address in the caller to which they write a pointer to a buffer they allocate, or which is a static variable of the function.
As mentioned in the comments it's possible to just return a pointer from a function. Memory allocation wouldn't work in C without it. But it's still pretty common to see functions that have size_t ** parameters or similar for returning pointers. That might be mostly a Unix pattern where almost every function returns a success/error code so the actual function output gets shifted to the arguments.
A: To make your question more clear let's initialize the variables
int b;
int *a = &b;
int **c = &a;
Then this call
f(a);
passes the object b by reference through the pointer a. The pointer itself is passed by value that is the function deals with a copy of the value of the pointer a.
This call
g(&b);
in fact is equivalent to the previous call relative to the accepted value. The variable b is passed by reference.
And this call also equivalent to the previous call
h(*c);
the variable b is passed by reference through the pointer *c value of which is equal to the value of the pointer a.
So all three functions accept the variable b by reference.
A: The easiest way to tell is by the size of the arithmetic step, although this doesn't show up for the char type. Pointer (address) arithmetic implicitly advances by the sizeof of the pointed-to type, which is not the case for ordinary arithmetic on, say, an unsigned int.
#include <iostream>
using namespace std;
int main() {
unsigned int ui, *uiPtr;
ui = 11;
cout << ui << endl;
ui += 1; //increases value by 1
cout << ui << endl;
uiPtr = &ui;
cout << uiPtr << endl;
uiPtr += 1; //increases value by sizeof
cout << uiPtr << endl;
return 0;
} | unknown | |
d19625 | test | You could calculate the empty space between the blue div and the pink div as a difference ($('.blue-container').height() - $('.blue').height()); then, when the document has scrolled past that measure, you know that the pink div has touched the blue one.
$(function(){
$(window).scroll(function(){
var margin = $('.blue-container').height() - $('.blue').height();
if($(this).scrollTop()>=margin){
$("body").addClass("orange")
} else{
$("body").removeClass("orange")
}
});
});
body {
margin:0;
background:lightblue;}
.blue-container {
height:70vh;
}
.blue {
height:40vh;
position:sticky;
width:70%;
top:0;
background:blue;
margin:auto;
}
.pink {
height:500px;
position:relative;
width:70%;
margin-right:auto;
margin-left:auto;
background:pink;
text-align:center;
}
.orange{
background:orange
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class='blue-container'>
<div class='blue'></div>
</div>
<div class='pink'> When I touch the blue bloc, I would like the 'body background' change into an other color (for exemple : orange)</div>
A: I don't know if you can do that in pure CSS, but here is a vanilla JS solution:
*
*We trigger an event on scroll
*We check blue's bottom position and pink's top position
*If they are equal, we trigger the logic.
var blue = document.querySelector('.blue')
var pink = document.querySelector('.pink')
var onTouch = false;
//http://dev.w3.org/2006/webapi/DOM-Level-3-Events/html/DOM3-Events.html#event-type-scroll
function onScrollEventHandler(ev) {
var bottom = blue.getBoundingClientRect().bottom
var top = pink.getBoundingClientRect().top
// we set
if (bottom == top && !onTouch) {
document.body.style.backgroundColor = 'orange'
onTouch = true
}
// we reset
if (bottom != top && onTouch) {
document.body.style.backgroundColor = 'lightblue'
onTouch = false
}
}
var el=window;
if(el.addEventListener)
el.addEventListener('scroll', onScrollEventHandler, false);
else if (el.attachEvent)
el.attachEvent('onscroll', onScrollEventHandler);
body {
margin:0;
background:lightblue;}
.blue-container {
height:70vh;
}
.blue {
height:40vh;
position:sticky;
width:70%;
top:0;
background:blue;
margin:auto;
}
.pink {
height:500px;
position:relative;
width:70%;
margin-right:auto;
margin-left:auto;
background:pink;
text-align:center;
}
<div class='blue-container'>
<div class='blue'></div>
</div>
<div class='pink'> When I touch the blue bloc, I would like the 'body background' change into an other color (for exemple : orange)</div>
This code is a work in progress. All inputs are welcome.
A: You can use an intersection observer to detect when enough of the .pink element is inside the viewport to touch the .blue element:
const body = document.querySelector('body');
const blue = document.querySelector('.blue');
const target = document.querySelector('.pink');
const getHeight = (el) => el.getBoundingClientRect().height;
// get the threshold in which enough of the pink elment would be inside the viewport to touch the blue element
const threshold = (window.innerHeight - getHeight(blue)) / getHeight(target);
const options = {
rootMargin: '0px',
threshold
};
let prevRatio = 0;
const handleIntersect = (entries, observer) => {
entries.forEach(function(entry) {
// if going up (above the previous threshold & above the threshold
if (entry.intersectionRatio >= threshold && entry.intersectionRatio > prevRatio) {
body.classList.add('body--intersected');
} else {
body.classList.remove('body--intersected');
}
prevRatio = entry.intersectionRatio;
});
}
const observer = new IntersectionObserver(handleIntersect, options);
observer.observe(target);
body {
margin: 0;
background: lightblue;
}
.body--intersected {
background: pink;
}
.blue-container {
height: 70vh;
}
.blue {
height: 40vh;
position: sticky;
width: 70%;
top: 0;
background: blue;
margin: auto;
}
.pink {
height: 500px;
position: relative;
width: 70%;
margin-right: auto;
margin-left: auto;
background: pink;
text-align: center;
}
<div class='blue-container'>
<div class='blue'></div>
</div>
<div class='pink'> When I touch the blue bloc, I would like the 'body background' change into an other color (for exemple : orange)</div> | unknown | |
d19626 | test | I submitted a bug request (request #19180) to the development team, and they confirmed it is a bug.
You can see the entire status here at GitHub dotnet/roslyn.
Pilchie commented 16 hours ago
I can repro that in 15.2, but not 15.3. Moving to compiler based on the stack, >Abut I'm pretty sure this is a dupe. @jcouv?
jcouv commented 16 hours ago
Yes, this is a duplicate (of #17229 and possibly another one too).
It was fixed in dev15.3 (#17544) and we were unfortunately unable to pull the >fix into dev15.2.
Thanks @Matt11 for filing the issue and sorry for the bug.
It seems to be already fixed and will be - as far as I understood - available in the next update. But there is no announced date when it will be included by Microsoft, so I submitted an issue through "Send Feedback/Report a Problem" in Visual Studio 2017.
Notes:
*
*The issue is not limited to TryParse. I verified that it also occurs if you write your own function, i.e. the following sample shows the warning AD0001 as well:
static void Main(string[] args)
{
bool myOutDemo(string str, out int result)
{
result = (str??"").Length;
return result > 0;
}
// discard out parameter
if (myOutDemo("123", out _)) Console.WriteLine("String not empty");
}
*I noticed that there is now a VS Version 15.3 preview available, which should contain the fix mentioned in the GitHub comments. Check out the following link: Visual Studio 2017 Version 15.3 Preview. After installing it, I verified the issue again and can confirm it is fixed there.
Thanks to all who participated in the discussion above! (question comments) | unknown | |
d19627 | test | import pandas as pd

df1 = {
'Name':['George','Andrea','micheal','maggie','Ravi',
'Xien','Jalpa'],
'Is_Male':[1,0,1,0,1,1,0]}
df1 = pd.DataFrame(df1,columns=['Name','Is_Male'])
Typecast to Categorical column in pandas
df1['Is_Male'] = df1.Is_Male.astype('category') | unknown | |
d19628 | test | If you don't want to use the value in SQL, then you can just store it as a string. It is a string, not a range.
If you want it as a range in the database, then use two columns, a lower bound and an upper bound. I would make the bounds inclusive, so the single value 20 would be represented as 20 rather than 19/21.
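A minimal sketch of that two-column layout (table and column names are placeholders):
CREATE TABLE measurements (
    id         INT PRIMARY KEY,
    value_low  INT NOT NULL,
    value_high INT NOT NULL -- the single value 20 is stored as (20, 20)
);

-- does 20 fall inside a stored range?
SELECT * FROM measurements WHERE 20 BETWEEN value_low AND value_high;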
Some databases (notably Postgres) do support a range data type, but that is specific to those databases. | unknown | |
d19629 | test | After trying unsuccessfully to add this to Site.css and GridMvc.css:
.cssClassRed
{
background-color:red !important;
}
I ended up adding a new MyStyleSheet.css to the Content folder and placing the above in it. Then I updated the _Layout.cshtml adding to the head section the following:
<link href="@Url.Content("~/Content/MyStyleSheet.css")" rel="stylesheet" type="text/css" />
Once I did these two things, the rows were colored red as I needed. I'm not sure whether what I did was the correct way or the best way, but it worked. So, I thought I would share my solution in case others run into a similar issue as I was. | unknown | |
d19630 | test | You should use a GroupJoin instead of a Join; this will yield a set of MerchNoteHist rows (I'm assuming this is the same table as MerchantNoteHistories here) that match the AppId.
Then, you can use an aggregate (Max) in the resultSelector of the function.
Here's roughly what your code should look like:
_db.MerchApps
.Where(ma => ma.User.Status != ApplicationStatus.Approved)
.GroupJoin(
_db.MerchNoteHist,
ma => ma.Id,
note => note.AppId,
(ma, noteList) => new
{
MA = ma,
Note = noteList.OrderByDescending(n=>n.CreatedDate).First()
})
//rest of your logic here
A: Alternatively you could put this in as the second Where clause:
.Where(x => x.Note.CreatedDate== _db.MerchNoteHist
.Where(y => y.AppId == x.Note.AppId).OrderByDescending(y => y.CreatedDate)
.First().CreatedDate) | unknown | |
d19631 | test | Caveat: I haven't tried this, only read the manual.
After all your include lines (including the transitive set of nested includes, and even lines created by text substitution or built by functions), the Make variable MAKEFILE_LIST will reference all your makefiles.
So it should be sufficient to add a dependency to the end of your file such as
%.o: $(MAKEFILE_LIST)
You don't actually need the contents of the effective Makefile, just the list of files that comprise it.
A:
Is there a way to get make to spit out the Makefile after foo.mk has been included?
No.
There's gmake -d with a ton of debug output, some of which indicating which makefiles are being read:
$ gmake -d|grep Reading
Reading makefiles...
Reading makefile `GNUmakefile'...
Reading makefile `foo.mk' (search path) (no ~ expansion)...
This might be helpful if there are recursive include directives or those under conditionals.
Maybe you could tell us your actual problem you want to solve? | unknown | |
d19632 | test | I think what may be happening is that you have a very generic name of the thing ("a note"), when it is expecting to handle specific names which normally don't need an article ("Pixel 4", "Stairway to Heaven").
You may want to try one of these to address the problem:
*
*The thing.name parameter for that BII allows for an inline inventory which is a way of setting allowable values and alises for those values. You can create an inventory item for note that has aliases such as "a note" and "note".
*Consider using the actions.intent.CREATE_DIGITAL_DOCUMENT BII which indicates that it supports phrases such as "Create a note". | unknown | |
d19633 | test | I am not able to figure out what is wrong. Can you guys help me out?
Find a bug and you fix it for a day. Teach how to find bugs and believe me, it takes a lifetime to fix the bugs. :-)
Your fundamental problem is not that the algorithm is wrong -- though, since it gives incorect results, it certainly is wrong. But that's not the fundamental problem. The fundamental problem is that you don't know how to figure out where a program goes wrong. Fix that problem first! Learn how to debug programs.
Being able to spot the defect in a program is an acquired skill like any other -- you've got to learn the basics and then practice for hundreds of hours. So learn the basics.
Start by becoming familiar with the basic functions of your debugger. Make sure that you can step through programs, set breakpoints, examine local variables, and so on.
Then write yourself some debugging tools. They can be slow -- you're only going to use them when debugging. You don't want your debugging tools in the production version of your code.
The first debugging tool I would write is a method that takes a particular Node and produces a comma-separated list of the integers that are in the list starting from that node. So you'd say DumpNode(currentB) and what would come back is, say "{10,20,50,30}". Obviously doing the same for SSL is trivial if you can do it for nodes.
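A rough sketch of such a helper (the Node members are assumptions about your list class):
// using System.Collections.Generic;
private static string DumpNode(Node node)
{
    var items = new List<string>();
    for (var current = node; current != null; current = current.Next)
        items.Add(current.Value.ToString());
    return "{" + string.Join(",", items) + "}";
}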
I would also write tools that do things like count nodes in a list, tell you whether a given list is already sorted, and so on.
Now you have something you can type into the watch window to more easily observe the changes to your data structures as they flow by. (There are ways to make the debugger do this rendering automatically, but we're discussing the basics here, so let's keep it simple.)
That will help you understand the flow of data through the program more easily. And that might be enough to find the problem. But maybe not. The best bugs are the ones that identify themselves to you, by waving a big red flag that says "there's a bug over here". The tool that turns hard-to-find bugs into self-identifying bugs is the debug assertion.
When you're writing your algorithm, think "what must be true?" at various points. For example, before AlternateSplitting runs, suppose the list has 10 items. When it is done running, the two resulting lists had better have 5 items each. If they don't, if they have 10 items each or 0 items each or one has 3 and the other has 7, clearly you have a bug somewhere in there. So start writing debug-only code:
public static void AlternateSplitting(SSL src, SSL odd, SSL even)
{
#if DEBUG
int srcCount = CountList(src);
#endif
while (src.Head != null) { blah blah blah }
#if DEBUG
int oddCount = CountList(odd);
int evenCount = CountList(even);
Debug.Assert(CountList(src) == 0);
Debug.Assert(oddCount + evenCount == srcCount);
Debug.Assert(oddCount == evenCount || oddCount == evenCount + 1);
#endif
}
Now AlternateSplitting will do work for you in the debug build to detect bugs in itself. If your bug is because the split is not working out correctly, you'll know immediately when you run it.
Do the same thing to the list merging algorithm -- figure out every point where "I know that X must be true at this point", and then write a Debug.Assert(X) at that point. Then run your test cases. If you have a bug, then the program will tell you and the debugger will take you right to it.
Good luck! | unknown | |
d19634 | test | So in fact you're creating your own tab container? If you really want to do it yourself you will probably need something like this:
require(["dojo/ready", "dojo/on", "dojo/dom-attr", "dojo/dom-style", "dojo/query", "dojo/NodeList-dom"], function(ready, on, domAttr, domStyle, query) {
ready(function() {
query("ul li a").forEach(function(node) {
query(domAttr.get(node, "href")).forEach(function(node) {
domStyle.set(node, "display", "none");
});
on(node, "click", function(e) {
query("ul li a").forEach(function(node) {
if (node == e.target) {
query(domAttr.get(node, "href")).forEach(function(node) {
domStyle.set(node, "display", "block");
});
} else {
query(domAttr.get(node, "href")).forEach(function(node) {
domStyle.set(node, "display", "none");
});
}
});
});
});
});
});
I'm not sure how familiar you are with Dojo, but it uses a query that will loop all links in lists (with the dojo/query and dojo/NodeList-dom modules) (you should provide a classname or something like that to make it easier). Then it will, for each link, retrieve the div corresponding to it and hide it, it will also connect a click event handler to it (with the dojo/on module).
When someone clicks the link, it will (again) loop all the links, but this time it's doing that to determine which node is the target one and which isn't (so it can hide/show the corresponding div).
I made a JSFiddle to show you this. If something is still not clear you should first try to look at the reference guide of Dojo since it really demonstrates the most common uses of most modules.
But since this behavior is quite similar to a TabContainer, I would recommend you to look at the TabContainer reference guide. | unknown | |
d19635 | test | You can do this with the help of a Tally Table.
WITH E1(N) AS( -- 10 ^ 1 = 10 rows
SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N)
),
E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b), -- 10 ^ 2 = 100 rows
E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b), -- 10 ^ 4 = 10,000 rows
CteTally(N) AS(
SELECT TOP(SELECT MAX(Tracks) FROM Product_Asset)
ROW_NUMBER() OVER(ORDER BY(SELECT NULL))
FROM E4
)
SELECT
Id = ROW_NUMBER() OVER(ORDER BY pa.PAId, t.N),
pa.PAId,
TrackNumber = t.N
FROM Product_Asset pa
INNER JOIN CteTally t
ON t.N <= pa.Tracks
ONLINE DEMO
A: Try this; I am not using any Tally Table.
declare @Product_Asset table(PAId int,Tracks int)
insert into @Product_Asset values (1 ,2),(2, 3)
;with CTE as
(
select PAId,1 TrackNumber from @Product_Asset
union all
select pa.PAId,TrackNumber+1 from @Product_Asset pa
inner join cte c on pa.PAId=c.PAId
where c.TrackNumber<pa.Tracks
)
select ROW_NUMBER()over(order by paid)id, * from cte
IMHO, whether a recursive CTE, a subquery, or a temp table performs best varies from example to example.
I find recursive CTEs more readable and won't avoid them unless they exhibit a performance problem.
I am not convinced that a recursive CTE is hidden RBAR.
A CTE is just syntax, so in theory it is just a subquery.
We can take some example to prove that using a #Temp table will improve performance; that doesn't mean we should always use a temp table.
Similarly, in this example using a tally table may not improve performance, so it does not follow that we have to take the help of a tally table at all. | unknown | |
d19636 | test | You can call getText() on the result of element.all() directly:
var toCompare = ["AGO", "9"];
element.all(by.css('.itemField')).getText().then(function (texts) {
for (var i = 0; i < toCompare.length; i++) {
if (texts[i] != toCompare[i]) {
console.log("Values don't match");
}
}
});
Or, you may even expect it like this (not sure if this is what you are actually trying to do):
expect($$('.itemField').getText()).toEqual(toCompare); | unknown | |
d19637 | test | 1st method :
With .split(), you can directly get the number of 'CGG' in your string.
len(fragile_x_test.split('CGG'))-1
It should also be easier to calculate the tandem variable with it.
2nd method :
If you don't need to work on the non 'CGG' part of the string, you can use .count()
fragile_x_test.count('CGG')
3rd method :
This method counts only one occurrence for each run of one or more "CGG" in a row, and will add one to the tandem variable each time there are at least 5 "CGG" in a row.
example="|CGGCGGCGGCGGCGG|ACTACT|CGGCGGCGGCGGCGGCGGCGG|"
count , repeat , tandem = 0 , 0 , 0
for element in example.split('CGG')[1:-1]:
if element == '':
count+=1
if count==4: tandem+=1
else:
count=0
if count==1:
repeat+=1
print("Number of CGG in a row : ",repeat)
print("Number of CGG tandems : ",tandem)
It will print repeat=2 and tandem=2. | unknown | |
d19638 | test | Seems like you are sending notifications from the Firebase console. These messages are always notification messages. If a notification message has an accompanying data payload then that data is only available to your application if the user taps the notification. If the user never taps the notification then that data will not be available to your app. So you should not use notification messages to send app critical data to your application.
If you send data messages they are always handled by onMessageReceived callback. At that point you can either silently handle the message or display your own notification. So use data messages if you want to be sure that your application has an opportunity to handle the data in the message.
A: So basically you want to store that data ?
if you want to store that data then write your code in OnReceive() method and write Database insertion code in it.
and put one flag in it like "Dismissed=true"
and if user open your notification then make it false and you can get your result.
There is no specific method to detect whether your app's notification is dismissed or not. So you have to manually maintain that data. | unknown | |
d19639 | test | Java 1.5 is the supported version for WebLogic 9.2. It looks like 10.0.3 introduced Java 6 for RH AS 5, 64-bit.
Look here for links to info about each version of WebLogic. From there, you have to choose your operating system and architecture to find out what JVMs are supported.
A: WebLogic 9.2.1 - 9.2.3 does not support JDK 1.6; any patch/revision of JDK 1.5 is supported. The latest as of today's date is patch 22 of JDK 1.5. | unknown | |
d19640 | test | I think you're mixing up .NET's regex syntax with PHP's. PHP requires you to use a regex delimiter in addition to the quotes that are required by the C# string literal. For instance, if you want to match "foo" case-insensitively in PHP you would use something like this:
'/foo/i'
...but C# doesn't require the extra regex delimiters, which means it doesn't support the /i style for adding match modifiers (that would have been redundant anyway, since you're also using the RegexOptions.IgnoreCase flag). I think this is what you're looking for:
@"show_name=(.*?)&show_name_exact=true"">(.*?)<"
Note also how I escaped the internal quotation mark using another quotation mark instead of a backslash; that's how quotes are escaped in C#'s verbatim strings with the leading '@' (which are highly recommended for writing regexes), whereas an old-fashioned string literal would use \" instead. Failing to escape it one way or the other is why you were getting the unterminated string error. | unknown | |
d19641 | test | UPDATE 06/21: now also works for variable products.
It is not necessary to get the postmeta data via get_post_meta because you can access the product object via the $postid.
Once you have the product object, you have access to all kinds of product information.
*
*WooCommerce: Get Product Info (ID, SKU, $) From $product Object
So you get:
// Column header
function filter_manage_edit_product_columns( $columns ) {
// Add column
$columns['discount'] = __( 'Discount', 'woocommerce' );
return $columns;
}
add_filter( 'manage_edit-product_columns', 'filter_manage_edit_product_columns', 10, 1 );
// Column content
function action_manage_product_posts_custom_column( $column, $postid ) {
// Compare
if ( $column == 'discount' ) {
// Get product object
$product = wc_get_product( $postid );
// Is a WC product
if ( is_a( $product, 'WC_Product' ) ) {
// Product is on sale
if ( $product->is_on_sale() ) {
// Output
echo '<ul>';
// Simple products
if ( $product->is_type( 'simple' ) ) {
// Get regular price
$regular_price = $product->get_regular_price();
// Get sale price
$sale_price = $product->get_sale_price();
// Calculate discount percentage
$discount_percentage = ( ( $sale_price - $regular_price ) / $regular_price ) * 100;
// Output
echo '<li>' . abs( number_format( $discount_percentage, 2, '.', '') ) . '%' . '</li>';
// Variable products
} elseif ( $product->is_type( 'variable' ) ) {
foreach( $product->get_visible_children() as $variation_id ) {
// Get product
$variation = wc_get_product( $variation_id );
// Get regular price
$regular_price = $variation->get_regular_price();
// Get sale price
$sale_price = $variation->get_sale_price();
// NOT empty
if ( ! empty ( $sale_price ) ) {
// Get name
$name = $variation->get_name();
// Calculate discount percentage
$discount_percentage = ( ( $sale_price - $regular_price ) / $regular_price ) * 100;
// Output
echo '<li>' . $name . '</li>';
echo '<li>' . abs( number_format( $discount_percentage, 2, '.', '') ) . '%' . '</li>';
}
}
}
// Output
echo '</ul>';
}
}
}
}
add_action( 'manage_product_posts_custom_column', 'action_manage_product_posts_custom_column', 10, 2 ); | unknown | |
d19642 | test | Assuming your strings are byte-aligned, from a 100 MB buffer you'll get roughly 100 million different strings, which can be put into a hash table of approximately 800 MB in size with constant (O(1)) access time.
This will allow you to make the search as fast as possible, because once you have your 8-byte string, you immediately know where this string was seen in your buffer. | unknown | |
d19643 | test | The first thing you should probably do is fix your regex. You cannot have a range like [a-Z], you can just do [a-z] and use the [NC] (no case) flag. Also, you want this rule at the very end since it'll match requests for /projects which will make it so the rule further down will never get applied. Then, you want to get rid of all your leading slashes. Lastly, you want a boundary for your regex, otherwise it'll match index.php and cause another error.
So:
RewriteEngine on
RewriteRule ^home /index.php?page=home
RewriteRule ^education /index.php?page=home&filter=1
RewriteRule ^skills /index.php?page=home&filter=2
RewriteRule ^projects /index.php?page=home&filter=3
RewriteRule ^experience /index.php?page=home&filter=4
RewriteRule ^([a-z]+)$ /index.php?page=$1 [NC] | unknown | |
d19644 | test | THEY ARE OBSOLETE (at least as separate added files)
They were used in an optional, and (nominally) geographically restricted, patch to versions of Sun-then-Oracle Java before late 2017, to enable (symmetric) ciphers with strength of more than 128 bits, which were disabled in the basic distribution packages in a lingering after-effect of conformity to US export regulations from the 1990s. No such patch was ever needed for OpenJDK, once that was released (which only started after the 'crypto thaw'), but in its early years it wasn't widely supported and sometimes not consistently available, so many people continued using the Oracle/Sun versions -- and many Stack Overflow questions and/or answers (both Stack Overflow and others like security.SX and Super User and Server Fault) were written for that case. Since Stack Overflow doesn't automatically delete or even deprecate old content, it remains available.
For the official details (of the versions that still have any support, even paid) see https://www.oracle.com/java/technologies/javase-jce-all-downloads.html .
IBM has long had its own implementation of Java, especially the cryptographic parts, and had a similar (but not the same) set of policy jars; I don't know if they still do, since I no longer have IBM systems running Java. In any case, Stack Overflow questions and answers for IBM Java are rare.
The 'openness' of the original Sun-centered model for Java was somewhat controversial for years, until finally settled by the establishment of OpenJDK. If you want to discuss that, it probably belongs on https://opensource.stackexchange.com/ . | unknown | |
d19645 | test | Was able to fix this after removing two brackets! Oh, Python. :/
This is the fixed line:
'menuItems' : [{'action' : 'PLAY_VIDEO', 'payload' : 'https://eye-of-the-hawk.appspot.com/static/videos/waterfall.mp4'}], | unknown | |
d19646 | test | Angular.js already has a date filter: {{20140314 | date}} // Jan 1, 1970 9:35:40 AM
Angular Date Docs
A: This works for me,
.directive('myDate', ['$timeout', '$filter', function ($timeout, $filter)
{
return {
require: 'ngModel',
link: function ($scope, $element, $attrs, $ctrl)
{
var dateFormat = 'mm/dd/yyyy';
$ctrl.$parsers.push(function (viewValue)
{
//convert string input into moment data model
var pDate = Date.parse(viewValue);
if (isNaN(pDate) === false) {
return new Date(pDate);
}
return undefined;
});
$ctrl.$formatters.push(function (modelValue)
{
var pDate = Date.parse(modelValue);
if (isNaN(pDate) === false) {
return $filter('date')(new Date(pDate), dateFormat);
}
return undefined;
});
$element.on('blur', function ()
{
var pDate = Date.parse($ctrl.$modelValue);
if (isNaN(pDate) === true) {
$ctrl.$setViewValue(null);
$ctrl.$render();
} else {
if ($element.val() !== $filter('date')(new Date(pDate), dateFormat)) {
$ctrl.$setViewValue($filter('date')(new Date(pDate), dateFormat));
$ctrl.$render();
}
}
});
$timeout(function ()
{
$element.kendoDatePicker({
format: dateFormat
});
});
}
};
}]) | unknown | |
d19647 | test | To check if an object is of type List(of T) no matter of what type T is, you can use Type.GetGenericTypeDefinition() as in the following example:
Public Sub Foo(obj As Object)
If IsGenericList(obj) Then
...
End If
End Sub
...
Private Function IsGenericList(ByVal obj As Object) As Boolean
Return obj.GetType().IsGenericType _
AndAlso _
obj.GetType().GetGenericTypeDefinition() = GetType(List(Of ))
End Function
Or alternatively as an extension method:
Public Sub Foo(obj As Object)
If obj.IsGenericList() Then
...
End If
End Sub
...
Imports System.Runtime.CompilerServices
Public Module ObjectExtensions
<Extension()> _
Public Function IsGenericList(obj As Object) As Boolean
Return obj.GetType().IsGenericType _
AndAlso _
obj.GetType().GetGenericTypeDefinition() = GetType(List(Of ))
End Function
End Module
A: List(Of Object) and List(Of String) are different types. You can do different things with them - for example, you can't add a Button to a List(Of String).
If you're using .NET 4, you might want to consider checking against IEnumerable(Of Object) instead - then generic covariance will help you.
Do you have any reason not to just make Foo generic itself?
A: It's difficult to tell exactly what you're trying to accomplish. I'm pretty bad at mind reading, so I thought I'd let a few others try answering this question first. It looks like their solutions haven't been the silver bullet you're looking for, so I thought I'd give it a whack.
I suspect that this is actually the syntax you're looking for:
Public Sub Foo(Of T)(ByVal myList As List(Of T))
' Do stuff
End Sub
This makes Foo into a generic method, and ensures that the object you pass in as an argument (in this case, myList) is always a generic list (List(Of T)). You can call any method implemented by a List(Of T) on the myList parameter, because it's guaranteed to be the proper type. There's really no need for anything as complicated as Reflection. | unknown | |
d19648 | test | ITYM an implementation of a LRU algorithm.
*
*How to set up a simple LRU cache using LinkedHashMap
*Simple LRU Caching with Expiration
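A minimal sketch of the LinkedHashMap approach from the first link (the capacity of 10 is an assumption):
import java.util.LinkedHashMap;
import java.util.Map;

class RecentItems<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 10;

    RecentItems() {
        super(16, 0.75f, true); // true = access order, which gives LRU behaviour
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_ENTRIES;
    }
}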
A: This is a classic application for a queue data structure. A stack of size 10 would keep track of the first 10 elements you added to it, whereas a queue of size 10 would keep track of the 10 most recent elements. If you are looking for a "recent items" list, as suggested by the title of your question, then a queue is the way to go.
edit: Here is an example, for visualization purposes:
Let's say you want to keep track of the most recent 4 items, and you access eight items in the following order:
F, O, R, T, Y, T, W, O
Here is what your data structures would look like over time:
access item queue stack
1 F [ F . . . ] [ . . . F ]
2 O [ O F . . ] [ . . O F ]
3 R [ R O F . ] [ . R O F ]
4 T [ T R O F ] [ T R O F ]
5 Y [ Y T R O ] [ Y R O F ]
6 T [ T Y T R ] [ T R O F ]
7 W [ W T Y T ] [ W R O F ]
8 O [ O W T Y ] [ O R O F ]
Removing the top element from a stack removes the item that was most recently added, not the oldest item.
Removing one element from a queue removes the oldest item, so your queue will automatically hold the most recent items.
edit: with thanks to Adam Jaskiewicz, here is some documentation on Java's Queue implementation. | unknown | |
d19649 | test | Although your code that you're working on is less than desirable, I'll just provide the fix you require at this point:
Change your code to :
OleDbCommand cmd = new OleDbCommand("SELECT ID FROM materials WHERE Type=1", db_def.conn);
OleDbDataReader reader = cmd.ExecuteReader();
int result=0;
if (reader.HasRows)
{
reader.Read();
result = reader.GetInt32(0);
}
strSQL = "UPDATE materials SET ";
strSQL = strSQL + "Dscr = 'concrete', ";
strSQL = strSQL + "width=50 ";
strSQL = strSQL + " WHERE ID=" + result;
objCmd = new OleDbCommand(strSQL, db_def.conn);
objCmd.ExecuteNonQuery();
You need to provide the value of the result variable outside the SQL string - the database will not know the value of 'result' in its own context.
EDIT: the result variable was declared within the if statement, therefore not available further down for assigning.
A: you can try this and tell me if it works
OleDbCommand cmd = new OleDbCommand("SELECT ID FROM materials WHERE Type=1", db_def.conn);
OleDbDataReader reader = cmd.ExecuteReader();
int result=-1 ;
if (reader.HasRows)
{
reader.Read();
result = reader.GetInt32(0);
}
if (result != -1)
{
strSQL = "UPDATE materials SET ";
strSQL = strSQL + "Dscr = 'concrete', ";
strSQL = strSQL + "width=50 ";
strSQL = strSQL + " WHERE ID="+result;
objCmd = new OleDbCommand(strSQL, db_def.conn);
objCmd.ExecuteNonQuery();
} | unknown | |
d19650 | test | Something like this, with dplyr, maybe:
library(dplyr)
UN_match %>% left_join(ETS_match) %>% # join the data
mutate(smaller1.AA = value < emissions) %>% # add the true/false column
select(country, country_code, value, year, smaller1.AA) # only useful columns
But all your values are < emissions, so the rows are all TRUE in this case.
Improved by removing if_else, thanks @Rui Barradas.
A: If the datasets have the same number of rows and all the rows align correctly, you can just do:
ETS_match$smaller1.AA <- ETS_match$value < UN_match$emissions
A: Because the data you're working with are from two different data frames, you have either two options:
*
*Compare them independently
*Combine the data into one data frame.
For 1, you can do something like
ETS_match$smaller1.AA <- ETS_match$value < UN_match$emissions
For 2, you could use merge to bring the data frames together.
How to join (merge) data frames (inner, outer, left, right)? | unknown | |
d19651 | test | http://www.w3.org/wiki/HTML/Elements/section is not the official definition of section. section is defined in the HTML5 specification, which currently is a Candidate Recommendation (which is a snapshot of the Editor’s Draft).
In the CR, section is defined as:
The section element represents a generic section of a document or application. A section, in this context, is a thematic grouping of content, typically with a heading.
section is a sectioning content element (together with article, aside and nav). Those sectioning elements and the headings (h1-h6) create an outline.
The following three examples are semantically equivalent (same meaning, same outline):
<!-- example 1: using headings only -->
<h1>My first day</h1>
<p>…</p>
<h2>Waking up</h2>
<p>…</p>
<h2>The big moment!</h2>
<p>…</p>
<h2>Going to bed</h2>
<p>…</p>
<!-- example 2: using section elements with corresponding heading levels -->
<section>
<h1>My first day</h1>
<p>…</p>
<section>
<h2>Waking up</h2>
<p>…</p>
</section>
<section>
<h2>The big moment!</h2>
<p>…</p>
</section>
<section>
<h2>Going to bed</h2>
<p>…</p>
</section>
</section>
<!-- example 3: using section elements with h1 everywhere -->
<section>
<h1>My first day</h1>
<p>…</p>
<section>
<h1>Waking up</h1>
<p>…</p>
</section>
<section>
<h1>The big moment!</h1>
<p>…</p>
</section>
<section>
<h1>Going to bed</h1>
<p>…</p>
</section>
</section>
So you can use section whenever (*) you use h1-h6. And you use section when you need a separate entry in the outline but can’t (or don’t want to) use a heading.
Also note that header and footer always belong to its nearest ancestor sectioning content (or sectioning root, like body, if there is no sectioning element) element. In other words: each section/article/aside/nav element can have its own header/footer.
article, aside and nav are, so to say, more specific variants of the section element.
two completly different usage cases
These two use-cases are not that different at all. In the "container" case, you could say that section represents a chapter of the document, while in the "chapter" case section represents a chapter of the article/content (which, of course, is part of the document).
In the same way, some headings are used to title web page parts (like "Navigation", "User menu", "Comments", etc.), and some headings are used to title content ("My first day", "My favorite books", etc.).
A: OK so here is what I've gathered from authoritative sources.
MDN:
The HTML Section Element (<section>) represents a generic section of a document, i.e., a thematic grouping of content, typically with a heading.
Usage notes :
If it makes sense to separately syndicate the content of a <section> element, use an <article> element instead.
Do not use the <section> element as a generic container; this is what <div> is for, especially when the sectioning is only for styling purposes. A rule of thumb is that a section should logically appear in the outline of a document.
Shay Howe's guide:
A section is more likely to get confused with a div than an article. As a block level element, section is defined to represent a generic document or application section.
The best way to determine when to use a section versus a div is to look at the actual content at hand. If the block of content could exist as a record within a database and isn’t explicitly needed as a CSS styling hook then the section element is most applicable. Sections should be used to break a page up, providing a natural hierarchy, and most commonly will have a proper heading.
dev.opera.com
Basically, the article element is for standalone pieces of content that would make sense outside the context of the current page, and could be syndicated nicely. Such pieces of content include blog posts, a video and it's transcript, a news story, or a single part of a serial story.
The section element, on the other hand is for breaking the content of a page into different functions or subjects areas, or breaking an article or story up into different sections.
A: <article> and <section> are both sectioning content. You can nest one sectioning element inside another to slice up the outer element into sections.
HTML Living Standard, 4.4.11:
...
Sectioning content elements are always considered subsections of their nearest ancestor sectioning root or their nearest ancestor element of sectioning content, whichever is nearest, regardless of what implied sections other headings may have created.
...
You can consider a <section> as a generic sectioning element. It's like a <div> that defines a section within its closest sectioning parent (or the nearest sectioning root, which may be the <body>).
An <article> is also a section, but it does have some semantics. Namely, it represents content that is self-contained (that is, it could possibly be its own page and it'd still make sense).
A: Here's the official w3c take on section:
http://www.w3.org/wiki/HTML/Elements/section
Quote: "The [section] element represents a generic section of a document or application."
I guess, in theory if you have an article within an article then your nested selections example might work. But, why would you have an article within an article ? Makes little semantic sense. | unknown | |
d19652 | test | You can use a while loop here
Scanner scanner = new Scanner(System.in);
boolean status = true;
while (status) { // this runs if status is true
System.out.println("Please enter a number");
int number = scanner.nextInt();
if (number == 1) {
System.out.println("Number is 1");
status=false; // when your condition match stop the loop
} else if (number == 2) {
System.out.println("Number is 2");
status=false;// when your condition match stop the loop
} else{
System.out.println("Invalid selection");
}
}
A: Try this...
int number;
do{
System.out.println("Please enter a number");
number = scan.nextInt();
if(number == 1)
{
System.out.println("Number is 1") ;
}
else if(number == 2)
{
System.out.println("Number is 2") ;
}
else
{
System.out.println("Invalid selection") ;
}
}while(number!=1 && number!=2);
A: I recommend you check if there is an int with Scanner.hasNextInt() before you call Scanner.nextInt(). And, that makes a nice loop test condition if you use it in a while loop.
Scanner scan = new Scanner(System.in);
System.out.println("Please enter a number");
while (scan.hasNextInt()) {
int number = scan.nextInt();
if (number == 1) {
System.out.println("Number is 1");
break;
} else if (number == 2) {
System.out.println("Number is 2");
break;
} else {
System.out.println("Invalid selection");
}
}
// ...
A: @Dosher, reposting @Raj_89's answer with a correction in the while loop condition. Please note the while loop condition.
int number = 0;
do{
System.out.println("Please enter a number");
Scanner scan = new Scanner(System.in);
number = scan.nextInt();
if(number == 1)
{
System.out.println("Number is 1") ;
}
else if(number == 2)
{
System.out.println("Number is 2") ;
}
else
{
System.out.println("Invalid selection") ;
}
}while(number==1 || number==2); | unknown | |
d19653 | test | The following implementation maintains full compatibility with os.path.expandvars, yet allows greater flexibility through optional parameters:
import os
import re
def expandvars(path, default=None, skip_escaped=False):
"""Expand environment variables of form $var and ${var}.
If parameter 'skip_escaped' is True, all escaped variable references
(i.e. preceded by backslashes) are skipped.
Unknown variables are set to 'default'. If 'default' is None,
they are left unchanged.
"""
def replace_var(m):
return os.environ.get(m.group(2) or m.group(1), m.group(0) if default is None else default)
reVar = (r'(?<!\\)' if skip_escaped else '') + r'\$(\w+|\{([^}]*)\})'
return re.sub(reVar, replace_var, path)
Below are some invocation examples:
>>> expandvars("$SHELL$unknown\$SHELL")
'/bin/bash$unknown\\/bin/bash'
>>> expandvars("$SHELL$unknown\$SHELL", '')
'/bin/bash\\/bin/bash'
>>> expandvars("$SHELL$unknown\$SHELL", '', True)
'/bin/bash\\$SHELL'
A: Try this:
re.sub('\$[A-Za-z_][A-Za-z0-9_]*', '', os.path.expandvars(path))
The regular expression should match any valid variable name, as per this answer, and every match will be substituted with the empty string.
Edit: if you don't want to replace escaped vars (i.e. \$VAR), use a negative lookbehind assertion in the regex:
re.sub(r'(?<!\\)\$[A-Za-z_][A-Za-z0-9_]*', '', os.path.expandvars(path))
(which says the match should not be preceded by \).
Edit 2: let's make this a function:
def expandvars2(path):
return re.sub(r'(?<!\\)\$[A-Za-z_][A-Za-z0-9_]*', '', os.path.expandvars(path))
check the result:
>>> print(expandvars2('$TERM$FOO\$BAR'))
xterm-256color\$BAR
the variable $TERM gets expanded to its value, the nonexisting variable $FOO is expanded to the empty string, and \$BAR is not touched.
A: The alternative solution - as pointed out by @HuStmpHrrr - is that you let bash evaluate your string, so that you don't have to replicate all the wanted bash functionality in python.
Not as efficient as the other solution I gave, but it is very simple, which is also a nice feature :)
>>> from subprocess import check_output
>>> s = '$TERM$FOO\$TERM'
>>> check_output(["bash","-c","echo \"{}\"".format(s)])
b'xterm-256color$TERM\n'
P.S. beware of escaping of " and \: you may want to replace \ with \\ and " with \" in s before calling check_output
A: Here's a solution that uses the original expandvars logic: Temporarily replace os.environ with a proxy object that makes unknown variables empty strings. Note that a defaultdict wouldn't work because os.environ
For your escape issue, you can replace r'\$' with some value that is guaranteed not to be in the string and will not be expanded, then replace it back.
class EnvironProxy(object):
__slots__ = ('_original_environ',)
def __init__(self):
self._original_environ = os.environ
def __enter__(self):
self._original_environ = os.environ
os.environ = self
return self
def __exit__(self, exc_type, exc_val, exc_tb):
os.environ = self._original_environ
def __getitem__(self, item):
try:
return self._original_environ[item]
except KeyError:
return ''
def expandvars(path):
replacer = '\0' # NUL shouldn't be in a file path anyways.
while replacer in path:
replacer *= 2
path = path.replace('\\$', replacer)
with EnvironProxy():
return os.path.expandvars(path).replace(replacer, '$')
A: I have run across the same issue, but I would propose a different and very simple approach.
If we look at the basic meaning of "escape character" (as they started in printer devices), the purpose is to tell the device "do something different with whatever comes next". It is a sort of clutch. In our particular case, the only problem we have is when we have the two characters '\' and '$' in a sequence.
Unfortunately, we do not have control of the standard os.path.expandvars, so that the string is passed lock, stock and barrel. What we can do, however, is to fool the function so that it fails to recognize the '$' in that case! The best way is to replace the $ with some arbitrary "entity" and then to transform it back.
def expandvars(value):
"""
Expand the env variables in a string, respecting the escape sequence \$
"""
    DOLLAR = "&#36;"  # HTML entity for "$", very unlikely to occur in real input
escaped = value.replace(r"\$", r"\%s" % DOLLAR)
return os.path.expandvars(escaped).replace(DOLLAR, "$")
I used the HTML entity, but any reasonably improbable sequence would do (a random sequence might be even better). We might imagine cases where this method would have an unwanted side effect, but they should be so unlikely as to be negligible.
A: I was unhappy with the various answers, needing a little more sophistication to handle more edge cases such as arbitrary numbers of backslashes and ${} style variables, but not wanting to pay the cost of a bash eval. Here is my regex based solution:
#!/bin/python
import re
import os
def expandvars(data,environ=os.environ):
out = ""
regex = r'''
( (?:.*?(?<!\\)) # Match non-variable ending in non-slash
(?:\\\\)* ) # Match 0 or even number of backslash
(?:$|\$ (?: (\w+)|\{(\w+)\} ) ) # Match variable or END
'''
for m in re.finditer(regex, data, re.VERBOSE|re.DOTALL):
this = re.sub(r'\\(.)',lambda x: x.group(1),m.group(1))
v = m.group(2) if m.group(2) else m.group(3)
if v and v in environ:
this += environ[v]
out += this
return out
# Replace with os.environ as desired
envars = { "foo":"bar", "baz":"$Baz" }
tests = { r"foo": r"foo",
r"$foo": r"bar",
r"$$": r"$$", # This could be considered a bug
r"$$foo": r"$bar", # This could be considered a bug
r"\n$foo\r": r"nbarr", # This could be considered a bug
r"$bar": r"",
r"$baz": r"$Baz",
r"bar$foo": r"barbar",
r"$foo$foo": r"barbar",
r"$foobar": r"",
r"$foo bar": r"bar bar",
r"$foo-Bar": r"bar-Bar",
r"$foo_Bar": r"",
r"${foo}bar": r"barbar",
r"baz${foo}bar": r"bazbarbar",
r"foo\$baz": r"foo$baz",
r"foo\\$baz": r"foo\$Baz",
r"\$baz": r"$baz",
r"\\$foo": r"\bar",
r"\\\$foo": r"\$foo",
r"\\\\$foo": r"\\bar",
r"\\\\\$foo": r"\\$foo" }
for t,v in tests.iteritems():
g = expandvars(t,envars)
if v != g:
print "%s -> '%s' != '%s'"%(t,g,v)
print "\n\n"
A: There is a pip package called expandvars which does exactly that.
pip3 install expandvars
from expandvars import expandvars
print(expandvars("$PATH:${HOME:?}/bin:${SOME_UNDEFINED_PATH:-/default/path}"))
# /bin:/sbin:/usr/bin:/usr/sbin:/home/you/bin:/default/path
It has the benefit of implementing default value syntax (i.e., ${VARNAME:-default}). | unknown | |
d19654 | test | The code wall is in <script> tags, so I'm assuming it must be javascript.
Yes, it's clearly printed right there: <script type="text/javascript">.
But I'm confused about why it's presented in such an unreadable/cluttered format. Surely there must be a reason for that.
It's minified, a form of obfuscation which makes JavaScript smaller to download and more difficult to reverse engineer.
I downloaded the source code and looked at the html page...
That probably broke a lot of things. You can't just download a page without downloading all of it's relatively-referenced paths.
So does this mean that any time i see a wall of code like this, ...
No, there's nothing you can tell about the code except that it's
*
*inline
*minified
If my above assumptions are correct, then would learning javascript allow me to fully understand that code wall?
No, nobody writes code that way, and nobody (easily) understands code written that way. A computer compressed/minified the code, and to understand it you need to learn JavaScript, and then unminify the code, which is a far from perfect process. Many forms of minification are "destructive" in that it's impossible to arrive back at the original source code. Human-readable tokens are often turned into single characters, and there is no way to undo this process, the original human-readable names are lost. | unknown | |
d19655 | test | With your shown samples/attempts please try following htaccess rules file.
Please make sure to clear your browser cache before testing your URLs.
RewriteEngine ON
##Rules for uris which starts from affiliate rewrite to affiliate.html here..
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^affiliate/?$ affiliate.html [QSA,NC,L]
##Rules for uris which starts from ref rewrite it to ref.html here..
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ref ref.html [QSA,NC,L]
##Rules for rest of the non-existing pages to rewrite to index.html here...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.html [L]
A: Have your rules like this instead:
Options -MultiViews
DirectoryIndex index.html
RewriteEngine On
# Prevent further processing if request maps to a static resource
RewriteRule ^((index|affiliate|ref)\.html)?$ - [L]
RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule . - [L]
# /affiliate -> affiliate.html
RewriteRule ^affiliate$ affiliate.html [L]
# /ref/* -> ref.html
RewriteRule ^ref/ ref.html [L]
# * -> index.html
RewriteRule . index.html [L]
You do not need the RewriteBase directive here. (Or the <IfModule> wrapper.) | unknown | |
d19656 | test | I am assuming that your fields are named in the standard Rails way. That is, if your input fields are named like this document_history[name] then the correct way to access them would be params[:document_history][:name].
To verify if they have been copied to your variable you can print to log. | unknown | |
d19657 | test | Note:
This was tested with PowerShell 3.0 and 4.0.
Within nested functions, all child functions have access to all of the parent functions' variables. Any changes to the variables are visible within the local scope of the current function and all the nested child functions called afterwards. When the child function has finished execution, the variables will return to the original values before the child function was called.
In order to apply the variable changes throughout all the nested functions scopes, the variable scope type needs to be changed to AllScope:
Set-Variable -Name varName -Option AllScope
This way, no matter at what level within the nested functions the variable gets modified, the change is persistent even after the child function terminates and the parent will see the new updated value.
Normal behavior of variable scopes within Nested Functions:
function f1 ($f1v1 , $f1v2 )
{
function f2 ()
{
$f2v = 2
$f1v1 = $f2v #local modification visible within this scope and to all its children
f3
"f2 -> f1v2 -- " + $f1v2 #f3's change is not visible here
}
function f3 ()
{
"f3 -> f1v1 -- " + $f1v1 #value reflects the change from f2
$f3v = 3
$f1v2 = $f3v #local assignment will not be visible to f2
"f3 -> f1v2 -- " + $f1v2
}
f2
"f1 -> f1v1 -- " + $f1v1 #the changes from f2 are not visible
"f1 -> f1v2 -- " + $f1v2 #the changes from f3 are not visible
}
f1 1 0
Printout:
f3 -> f1v1 -- 2
f3 -> f1v2 -- 3
f2 -> f1v2 -- 0
f1 -> f1v1 -- 1
f1 -> f1v2 -- 0
Nested Functions with AllScope variables:
function f1($f1v1, $f1v2)
{
Set-Variable -Name f1v1,f1v2 -Option AllScope
function f2()
{
$f2v = 2
$f1v1 = $f2v #modification visible throughout all nested functions
f3
"f2 -> f1v2 -- " + $f1v2 #f3's change is visible here
}
function f3()
{
"f3 -> f1v1 -- " + $f1v1 #value reflects the change from f2
$f3v = 3
$f1v2 = $f3v #assignment visible throughout all nested functions
"f3 -> f1v2 -- " + $f1v2
}
f2
"f1 -> f1v1 -- " + $f1v1 #reflects the changes from f2
"f1 -> f1v2 -- " + $f1v2 #reflects the changes from f3
}
f1 1 0
Printout:
f3 -> f1v1 -- 2
f3 -> f1v2 -- 3
f2 -> f1v2 -- 3
f1 -> f1v1 -- 2
f1 -> f1v2 -- 3 | unknown | |
d19658 | test | It seems there is an inconsistency in the data definition: you define
*
*a as the columns (ranging from 1 to 29)
*b as the rows (ranging from 1 to 4)
*nevertheless, then you refer to a 29 x 4 matrix, while it should be 4 x 29
Apart from that, you have to first re-arrange the definition of the input data as follows:
abc=[
1 1 0
2 1 0
3 1 0
4 1 0
5 1 0
6 1 360.389854270598
7 1 524.553377941978
8 1 587.550618428821
...
all the other data
...
]
that is to include them into [].
Then you can:
*
*extract the intensity data (which are in the third column of the abc matrix)
*use the reshape function to convert the intensity array into a matrix
*get the x and y data "automatically" by using the unique function
*get the number of rows and columns using the length function
*use the meshgrid function to generate the XY grid over which to plot the surface
At this point you can:
*
*use the surf function to plot a 3D surface (the z values will be the intensity data)
*create flat surface and use the intensity data as "colour"
*use the contour function to plot a 2D contour plot
*use the contour3 function to plot a 3D contour plot
This solution can be implemented as follows (where abc is your complete data set):
% Get the intensity data
intensity=abc(:,3);
% Get the x and y data
row_data=unique(abc(:,1));
col_data=unique(abc(:,2));
n_row=length(row_data);
n_col=length(col_data);
% Reshape the intensity data to get a 29x4 matrix
z=reshape(intensity,n_row,n_col);
% Create the grid to plot the surface
[X,Y]=meshgrid([1:n_col],[1:n_row])
% Plot a 3D surface
figure
surf(X,Y,z)
shading interp
colorbar
% Plot a flat surface with
figure
% Create a "dummy" zeros matrix to plot a flat surface
Z=zeros(size(X));
surf(X,Y,Z,z)
shading interp
colorbar
% Plot a 2D contour
figure
[c,h] = contour(z);
clabel(c,h)
colorbar
% Plot a 3D contour
figure
[c,h] = contour3(z);
clabel(c,h)
colorbar
Hope this helps.
Qapla' | unknown | |
d19659 | test | I figured it out; it's a bit of a handful but it's worth it. First of all, in the ViewController where the action of sending an email should be called, post an NSNotification, like this:
[self dismissViewControllerAnimated:YES completion:^{
[[NSNotificationCenter defaultCenter] postNotificationName:@"email" object:nil];
}];
Later on, when that view disappears, the original view controller will show up, since the other one was presented modally. In that one, put:
-(void)createEmail{
MFMailComposeViewController * mc = [[MFMailComposeViewController alloc] init];
NSArray *toRecipents = [NSArray arrayWithObject:@"[email protected]"];
mc.mailComposeDelegate = self;
[mc setSubject:_emailSubject];
[mc setMessageBody:_emailMessage isHTML:NO];
[mc setToRecipients:toRecipents];
[self presentViewController:mc animated:YES completion:nil];
}
And, in viewDidLoad, add the following line:
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(createEmail)
name:@"email" object:nil];
That way, we are telling the original view controller, in its viewDidLoad phase, to listen for a notification called "email". When we post it from the email view controller, the app knows that the notification has been fired and runs the associated selector, which creates and presents the MFMailComposeViewController. That way you will accomplish the designated target. | unknown |
d19660 | test | You get the error because the student is not yet written to the database, so no id has been assigned to the new student instance. If you use collections the data is immediately written to the database. In the model function you can use
$student=Student::Create(['bursary_provider_id' => 1,
'bursary_provider_reference' => 'xxx',
'student_name' => $row[1],
'student_initials' => $row[3],
'student_surname' => $row[2],
'passport_number' => $row[7],
'passport_expiration' => \PhpOffice\PhpSpreadsheet\Shared\Date::excelToDateTimeObject($row[9]),
'country_id' => 7,
'id_number' => $row[6],
'status' => $status,
'notes' => $row[5]]);
This will write the student to the database and generate the necessary id. I am not sure how this affects performance though.
A: The answer, (and do not ask me why, because documentation is so poor at this stage) is to use collections.
use Illuminate\Support\Collection;
use Maatwebsite\Excel\Concerns\ToCollection;
and instead of ToModel:
public function collection(Collection $rows)
and it doesn't 'return' anything. Other than these changes, using the exact same code in the OP works as intended. | unknown |
d19661 | test | I recommend using D3js, force directed graph is perfect for creating a network visualization, there are some examples here: D3 force directed graph | unknown | |
d19662 | test | You can just use margin: 0 auto to keep the div horizontally centered. Unfortunately you can't center the height the same way. Here is the fiddle for centering the width:
#page {
position: relative;
width: 90%;
height: 90%;
top: 1em;
margin:0em auto;
bottom: 1em;
}
JSfiddle
You can adjust the height using padding -- but as far as I know you can't make the height responsive that way.
A: #page {
position: fixed;
width: 90%;
height: 90%;
top: 1em;
right: 1em;
bottom: 1em;
left: 1em;
margin: auto;
}
Using combination of fixed position and auto margin did the trick. Hope this helps! | unknown | |
d19663 | test | Just curious, have you tried something like this:
hlName.DataNavigateUrlFormatString = "<a href=\"mailto:{0}\">{0}</a>";
or some variation of it?
A: okay sorry you need to switch to a bound field vs. a hyperlinkfield for the mailto. Apparently there is a problem with the ":" in the DataNavigateUrlFormatString.
Reference: http://forums.asp.net/t/1014242.aspx?How+to+create+mailto+in+gridview+
So all you really need to do is
BoundField hlName = new BoundField();
hlName.DataField= dt.Columns[1].ToString();
hlName.DataFormatString= "<a href=\"mailto:{0}\">{0}</a>";
hlName.HtmlEncodeFormatString = false;
That should resolve your problem. | unknown | |
d19664 | test | I would use some conditional aggregation for this as well as a subquery since you want to use the "null" as an actual value.
select
col2 = Isnull(col2, 'Total'),
A = sum(case when col1 = 'A' then 1 else 0 end),
B = sum(case when col1 = 'B' then 1 else 0 end),
NullVal = sum(case when col1 = 'Null' then 1 else 0 end),
count(col1) Total
from
(
select
col1 = Isnull(col1, 'Null'),
col2 = isnull(cast(col2 as varchar(5)), 'Null'),
col3
from yourtable
) d
group by col2
with cube;
See SQL Fiddle with Demo. This uses a CASE expression to get the totals for both the A, B, and null values in Col1 and then uses the CUBE to rollup the totals. Giving the result:
| COL2 | A | B | NULLVAL | TOTAL |
|-------|---|---|---------|-------|
| 1 | 2 | 1 | 1 | 4 |
| 2 | 1 | 1 | 0 | 2 |
| Null | 0 | 1 | 0 | 1 |
| Total | 3 | 3 | 1 | 7 | | unknown | |
d19665 | test | In the class, add a list of strings:
private List<string> _allFiles = new List<string>();
Separate Start() to another method and call that for each of your files. At the end of each process, add the completeText variable to that list as shown below:
public void Start()
{
theSourceFile = new FileInfo(Application.dataPath + "/puzzles.txt");
ProcessFile(theSourceFile);
// set theSourceFile to another file
// call ProcessFile(theSourceFile) again
}
private void ProcessFile(FileInfo file)
{
if (file != null && file.Exists)
reader = file.OpenText();
if (reader == null)
{
Debug.Log("puzzles.txt not found or not readable");
}
else
{
while ((txt = reader.ReadLine()) != null)
{
Debug.Log("-->" + txt);
completeText += txt + "\n";
}
_allFiles.Add(completeText);
}
}
Finally, in your OnGui() method, add a loop around the GUI.Label call.
int i = 1;
foreach(var file in _allFiles)
{
GUI.Label(new Rect(1000, 50 + (i * 400), 400, 400), file);
i++;
}
This assumes that you want the labels to appear vertically, and with no gap between them. | unknown | |
d19666 | test | will db monitoring with pg_stat/pg_statio slow down my other queries?
No. Not significantly, anyway.
1) I was wondering if these monitoring queries would cause my other non-monitoring queries (which I assume also need to write to the pg_statio tables to update these statistics) to lock up.
No. In PostgreSQL reads do not block writes.
2) Is there a way for me to capture database traffic on a postgresql database table so I can maybe replay this traffic on a copy of the database?
Not easily, at the present time.
You can record statements and parameters in the logs, along with a log_line_prefix that lets you reassemble them into transactions and sessions. It'll be painful to parse the logs for this though. Also, IIRC super-long statements can be truncated.
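As a sketch of what that configuration could look like in postgresql.conf (the exact prefix fields are a matter of taste; %m, %p, %c, %x, %u and %d are the timestamp, pid, session id, transaction id, user and database escapes):
log_statement = 'all'                      # log every statement
log_line_prefix = '%m [%p] %c %x %u@%d '   # timestamp, pid, session id, xid, user@db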
You can in PostgreSQL 9.4 extract the rows changed using logical decoding, but that doesn't tell you which statements changed them, and it doesn't let you reproduce the load. Replaying the change stream is a very different load to creating it in the first place.
Tools like pg_stat_statements can help a bit, but won't give you a verbatim stream of changes. | unknown | |
d19667 | test | You should be able to concatenate everything in the <a> tag like so:
echo '<a href="uploads/' . $row['bildnamn'] . '" rel="gallery" class="pirobox_gall" title="' . $row['uploaded'] . ' ' . $row['user'] . '">';
echo '<img src="uploads/' . $row['thumb_bildnamn'] . '">';
echo '</a>';
I inserted spaces to help emphasize where PHP does concatenation. In your case, a single quote starts/ends the string for PHP; a double quote is ignored and goes into the HTML. So this part:
title="' . $row['uploaded'] . ' ' . $row['user'] . '"
will make the title be the value of the uploaded column, then a space, then the value of the user column. Then just end the a tag with a >.
A: you could continue to concatenate the string
echo '<a href="uploads/'.$row['bildnamn'].'"'. 'rel="gallery" class="pirobox_gall" title="'.$row['uploaded'].' '.$row['user'].'">';
A: Try this:
echo '<a href="uploads/' . $row['bildnamn'] . '" rel="gallery" class="pirobox_gall" title="' . $row['uploaded'] . ' ' . $row['user'] . '">';
echo '<img src="uploads/' . $row['thumb_bildnamn'] . '">';
echo '</a>';
A: You can do this using string concatenation, like this:
$anchor = '<a href="uploads/'.$row['bildnamn'].'"';
$anchor .= 'rel="gallery" class="pirobox_gall" title="' . $row['uploaded'] . ' ' . $row['user'] . '">';
$anchor .= '<img src="uploads/'.$row['thumb_bildnamn'].'"></a>';
echo $anchor; | unknown | |
d19668 | test | I am sorry, but this behavior is not supported for now.
First of all, since the cinterop tool produces bindings as a .klib file, they are associated with a separate module. So it won't help if you somehow mark them as internal.
The .klib with the bindings is just another source set of the project.
Then, it should be possible to connect it with different kinds of dependencies. Right now, because of some language limitations, one cannot use the implementation dependency kind to connect Kotlin/Native libraries, only the api one. But it will probably become available someday.
For now, the best option I can recommend is to name the package as internal or something, to let the consumer know about its practical nature. | unknown | |
d19669 | test | The ActiveRecord version of uniq seems to ignore the given block, and just checks that the objects are unique. If you look at the source you see that it just sets a flag.
See
http://apidock.com/rails/ActiveRecord/QueryMethods/uniq
You can think of it as a modifier for the generated sql-statement.
A: The result is right, since uniq is from Array. To match the uniqueness in SQL idiom you need to use distinct.
From the example from the documentation:
person.pets.select(:name).distinct
# => [#<Pet name: "Fancy-Fancy">]
Try:
Projectuser.select(:id, :user_id).distinct
A: I have it solved by using:
Projectuser.uniq.pluck(:user_id) | unknown | |
d19670 | test | So, my problem was really not knowing enough to know exactly what to ask. The problem here wasn't dataTables or JS or Ajax or any of that. It was Turbolinks in Rails 4. Because it caches to make things seem fast, whenever I'd leave the page and come back to it, I'd have to refresh to get the underlying javascript to initialize.
Turbolinks has an option you can pass into your script to make a document ready on page:restore. That worked. If anyone else runs into this maybe they'll find this helpful. | unknown | |
d19671 | test | Add r before string, filter by boolean indexing and get index values to list:
i = df[df.column1.str.contains(r'\b\d{5}\b')].index.tolist()
print (i)
[0, 2]
Or if you want to match only numeric values with length 5, change the regex to use ^ and $ for the start and end of the string:
i = df[df.column1.str.contains(r'^\d{5}$')].index.tolist() | unknown | |
d19672 | test | It sounds like you want to override the save method of your tblTicket model so that it creates a new LogChanges object. Like this..
class tblTicket(models.Model):
...
    def save(self, *args, **kwargs):
        super(tblTicket, self).save(*args, **kwargs)
        LogChanges.objects.create(prevRemarks=self.remarks) | unknown |
d19673 | test | It seems that synchronization is not currently practical, because while the docs say that the "timestamp" field signifies when the API gave you the data, it actually does not do what it says because of some issue on their side: https://github.com/spotify/web-api/issues/1073
Instead the timestamp seems to change only when there is a new song starting, pause, or seek. This means that as it is, we cannot know when the API response was generated. | unknown | |
d19674 | test | Suppose two (or more) concurrently-running Java processes need to check for the existence of a file, create it if it doesn't exist, and then potentially read from that file over the course of their runs.
I don't quite understand the create and read part of the question. If you are looking to make sure that you have a unique file then you could use new File(...).createNewFile() and check to make sure that it returns true. To quote from the Javadocs:
Atomically creates a new, empty file named by this abstract pathname if
and only if a file with this name does not yet exist. The check for the
existence of the file and the creation of the file if it does not exist
are a single operation that is atomic with respect to all other
filesystem activities that might affect the file.
This would give you a unique file that only that process (or thread) would then "own". I'm not sure how you were planning on letting the writer know which file to write to however.
If you are talking about creating a unique file that you write to and then move into a write directory to be consumed, then the above should work. You would need to create a unique name in the write directory once you were done as well.
You could use something like the following:
private File getUniqueFile(File dir, String prefix) throws IOException {
long suffix = System.currentTimeMillis();
while (true) {
File file = new File(dir, prefix + suffix);
// try creating this file, if true then it is unique
if (file.createNewFile()) {
return file;
}
// someone already has that suffix so ++ and try again
suffix++;
}
}
As an alternative, you could also create a unique filename using UUID.randomUUID() or something to generate a unique name. | unknown | |
d19675 | test | Also answered at What every developer should know about time (which includes the referenced screenshots).
With daylight savings time ending today, I thought this was a good time for a post. How to handle time is one of those tricky issues where it is all too easy to get it wrong. So let's dive in. (Note: We learned these lessons when implementing the scheduling system in Windward Reports.)
First off, using UTC (also known as Greenwich Mean Time) is many times not the correct solution. Yet many programmers think if they store everything that way, then they have it covered. (This mistake is why several years ago when Congress changed the start of DST in the U.S. you had to run a hotfix on Outlook for it to adjust reoccurring events.)
So let's start with the key question – what do we mean by time? When a user says they want something to run at 7:00 am, what do they mean? In most cases they mean 7:00 am where they are located – but not always. In some cases, to accurately compare say web server statistics, they want each "day" to end at the same time, unadjusted for DST. At the other end, someone who takes medicine at certain times of the day and has that set in their calendar, will want that to always be on local time so a 3:00pm event is not 3:00am when they have travelled half way around the world.
So we have three main use cases here (there are some others, but they can generally be handled by the following):
1.The same absolute (for lack of a better word) time.
2.The time in a given time zone, shifting when DST goes on/off (including double DST which occurs in some regions).
3.The local time.
The first is trivial to handle – you set it as UTC. By doing this every day of the year will have 24 hours. (Interesting note, UTC only matches the time in Greenwich during standard time. When it is DST there, Greenwich and UTC are not identical.)
The second requires storing a time and a time zone. However, the time zone is the geographical zone, not the present offset (offset is the difference with UTC). In other words, you store "Mountain Time," not "Mountain Standard Time" or "Mountain Daylight Savings Time." So 7:00 am in "Mountain Time" will be 7:00 am in Colorado regardless of the time of year.
The third is similar to the second in that it has a time zone called "Local Time." However, it requires knowing what time zone it is in in order to determine when it occurs.
Outlook now has a means to handle this. Click the Time Zones button:
And you can now set the time zone for each event:
When I have business trips I use this including my flight times departing in one zone and arriving in another. Outlook displays everything in the local timezone and adjusts when that changes. The iPhone on the other hand has no idea this is going on and has everything off when I'm on a trip that is in another timezone (and when you live in Colorado, almost every trip is to another timezone).
Putting it to use
Ok, so how do you handle this? It's actually pretty simple. Every time needs to be stored one of two ways:
1.As UTC. Generally when stored as UTC, you will still set/display it in local time.
2.As a datetime plus a geographical timezone (which can be "local time").
Now the trick is knowing which to use. Here are some general rules. You will need to figure this out for additional use cases, but most do fall in to these categories.
1.When something happened – UTC. This is a singular event and regardless of how the user wants it displayed, when it occurred is unchangeable.
2.When the user selects a timezone of UTC – UTC.
3.An event in the future where the user wants it to occur in a timezone – datetime plus a timezone. Now it might be safe to use UTC if it will occur in the next several months (changing timezones generally have that much warning - although sometimes it's just 8 days), but at some point out you need to do this, so you should do it for all cases. In this case you display what you stored.
4.For a scheduled event, when it will next happen – UTC. This is a performance requirement where you want to be able to get all "next events" where their runtime is before now. Much faster to search against dates than recalculate each one. However, this does need to recalculate all scheduled events regularly in case the rules have changed for an event that runs every quarter.
1.For events that are on "local time" the recalculation should occur anytime the user's timezone changes. And if an event is skipped in the change, it needs to occur immediately.
.NET DateTime
Diving in to .NET, this means we need to be able to get two things which the standard library does not provide:
1.Create a DateTime in any timezone (DateTime only supports your local timezone and UTC).
2.For a given Date, Time, and geographical timezone, get the UTC time. This needs to adjust based on the DST rules for that zone on that date.
Fortunately there's a solution to this. We have open sourced our extensions to the DateTime timezone functionality. You can download WindwardTimeZone here. This uses registry settings in Windows to perform all calculations for each zone and therefore should remain up to date.
Browser pain
The one thing we have not figured out is how to know a user's location if they are using a browser to hit our web application. For most countries the locale can be used to determine the timezone – but not for the U.S. (6 zones), Canada, or Russia (11 zones). So you have to ask a user to set their timezone – and to change it when they travel. If anyone knows of a solution to this, please let me know.
Update: I received the following from Justin Bonnar (thank you):
document.getElementById('timezone_offset').value = new Date().getTimezoneOffset();
Using that plus the suggestion of the geo location for the IP address mentioned below will get you close. But it's not 100%. The time offset does not tell you, for example, whether you are in Arizona (they and Hawaii do not observe daylight savings time) or in the Pacific/Mountain (depending on DST) time zone. You also depend on javascript being on, although that is true for 99% of the users out there today.
The geo location based on IP address is also iffy. I was at a hotel in D.C. when I got a report of our demo download form having a problem. We pre-populate the form with city, state, & country based on the geo of the IP address. It said I was in Cleveland, OH. So again, usually right but not always.
My take is we can use the offset, and for cases where there are multiple timezones with that offset (on that given day), follow up with the geo of the IP address. But I sure wish the powers that be would add a tz= to the header info sent with an HTML request. | unknown | |
d19676 | test | You have a circular dependency, where each header tries to include the other, which is impossible. The result is that one definition ends up before the other, and the name of the second is not available within the first.
Where possible, declare each class rather than including the entire header:
class Example; // not #include "Example.h"
You won't be able to do this if one class actually contains (or inherits from) another; but this will allow the name to be used in many declarations. Since it's impossible for both classes to contain the other, you will be able to do this (or maybe just remove the #include altogether) for at least one of them, which should break the circular dependency and fix the problem.
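A minimal sketch of what Name.h could then look like (the member and method are illustrative assumptions, not from the original code); the full definition of Example is only needed in Name.cpp:
#ifndef NAME_H
#define NAME_H
class Example;                      // forward declaration instead of #include "Example.h"
class Name {
public:
    void greet(const Example& e);   // fine: only a reference is used in the declaration
private:
    Example* example = nullptr;     // fine: only a pointer is stored
};
#endif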
Also, don't use reserved names like _NAME, and don't pollute the global namespace with using namespace std;
A: See, here you are including #include "Example.h" in Name.h and #include "Name.h" in Example.h. Suppose the compiler processes Name.h first, so _NAME is now defined; it then tries to compile Example.h, which wants to include Name.h, but the content of Name.h will not be included in Example.h since _NAME is already defined. Hence class Name is not defined inside Example.h.
You can explicitly add a forward declaration of class Name; inside Example.h.
A: Try this:
Name.h
#ifndef NAMEH
#define NAMEH
#include <iostream>
#include <string>
using namespace std;
//#include "Example.h"
class Example;
class Name{
};
#endif
Name.cpp
#include "Name.h"
#include "Example.h"
...
Example.h
#ifndef EXAMPLEH
#define EXAMPLEH
#include <iostream>
#include <string>
using namespace std;
//#include "Name.h"
class Name;
class Example{
};
#endif
Example.cpp
#include "Example.h"
#include "Name.h"
... | unknown | |
d19677 | test | Your example breaks with something like:
a = [1,2,7,3,7]
b = [2,1,2,3,7]
c = [1,2,3,1,7]
The sequence should be [1,2,3,7] (if I understand the exercise correctly), but the problem is that the last element of a gets matched to the last elements of b and c, which means that start_b and start_c are set to the last elements and therefore the loops are over. | unknown | |
d19678 | test | This question shows how to do custom eval of the scripts:
jQuery: Evaluate script in ajax response
In the following piece of code... ** this code is from the answer of the other question ** just got it as a snippet:
$("body").append($(xml).find("html-to-insert").eq(0));
eval($(xml).find("script").text());
eval itself is bound to a window, that you can define to be the context:
windowObject.eval - when calling just eval('...'), it supposes you are calling just like this: window.eval('...')
Now you need to get the window that corresponds to the frame you want to execute the eval in and do something like this:
myIFrameWindow.eval('...')
When you do this, it is executed in the context of that window. It is just a matter of finding the window associated with the iframe you want.
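For instance (the iframe id is an assumption), the frame's window can be taken from its contentWindow property and used as the eval target:
var myIFrameWindow = document.getElementById('myFrame').contentWindow;
myIFrameWindow.eval($(xml).find("script").text());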
To find the window of a given frame, take a look at this post:
Getting IFRAME Window (And Then Document) References With contentWindow | unknown | |
d19679 | test | @Ravi, you can use the following module to migrate data from the old field to the new field, you just need to map the fields. The module contains an example:
https://www.drupal.org/project/migrate_d2d
If the previous option is too complicated, you can do the following trick:
export the old field table in INSERT format from the database; the table name will look like this:
field_data_field_name
Then, just replace the old field's table name with the new field's table name.
Finally, just execute those inserts into your database.
This is an example, how an insert statement looks from a drupal field
INSERT INTO field_data_field_name(entity_type,bundle,deleted,entity_id,revision_id,`language`,delta,field_address_type_value,field_address_type_format) VALUES
('entityform','solicitud_facturacion',0,656,656,'und',0,'AVENIDA',NULL);
I hope you understand this! | unknown | |
d19680 | test | This is the solution for Cento6
yum install libXext libXrender fontconfig libfontconfig.so.1
yum install urw-fonts
A: For if anyone interested, I solved my problems by installing true type fonts. After that wkhtmltopdf was able to display these fonts.
Ubuntu (18.04)
apt install fonts-droid-fallback ttf-dejavu fonts-freefont-ttf fonts-liberation ttf-ubuntu-font-family
Alpine Linux (3.9)
apk add ttf-dejavu ttf-droid ttf-freefont ttf-liberation ttf-ubuntu-font-family | unknown | |
d19681 | test | *
*Use .empty()
Description: Remove all child nodes of the set of matched elements from the DOM.
$("#myId").empty()
console.log($("#myId").get(0).outerHTML)
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="myId"><div>aa</div><h3>hellow</h3><a href="#"></a></div>
A: You could use different ways:
// ref: https://api.jquery.com/empty/
$("#myId").empty()
// ref: https://api.jquery.com/html/
$("#myId").html('')
A: you can use remove() method. .empty()
$( "#myId" ).empty()
A: In order to remove or empty all the elements of a container div in this case, you can:
$("#myId").empty();
OR
$("#myId").html('');
Reference link to empty() | unknown | |
d19682 | test | In your JS file, you are attaching a DOMContentLoaded handler to the code that activates the youtube player. So when your first get result is fired, it works fine because the DOM has just loaded and the JS runs. But on subsequent get calls the DOM is not loaded again, it only changes.
I believe you need to call the code which activates the youtube player every time the AJAX call succeeds.
What you can do is attach your JS code to a custom event and then trigger this event every time the AJAX call succeeds.
In JS:
You can follow this link to see how an event can be created and triggered in JS
How to trigger event in JavaScript?
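A rough sketch of that idea (the function and event names are assumptions): wrap the player-activation code in a function, run it on the initial load and on a custom event, and dispatch that event from the AJAX success handler.
// wrap the existing player-activation code in a function
function initYoutubePlayers() {
    // ... the code currently run on DOMContentLoaded goes here ...
}
document.addEventListener('DOMContentLoaded', initYoutubePlayers);
document.addEventListener('players:refresh', initYoutubePlayers);
// inside the $.ajax success handler, after inserting the new markup:
document.dispatchEvent(new Event('players:refresh'));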
A: Try re-initializing the function for the youtube embeds as shown below:
function getresult(url) {
$.ajax({
url: url,
type: "GET",
data: {
rowcount: $("#rowcount").val(),
"pagination_setting": $("#pagination-setting").val()
},
beforeSend: function () {
$("#overlay").show();
},
success: function (data) {
$("#pagination-result").html(data);
var div, n,
v = document.getElementsByClassName("youtube-player");
for (n = 0; n < v.length; n++) {
div = document.createElement("div");
div.setAttribute("data-id", v[n].dataset.id);
div.innerHTML = labnolThumb(v[n].dataset.id);
div.onclick = labnolIframe;
v[n].appendChild(div);
}
setInterval(function () {
$("#overlay").hide();
}, 500);
},
error: function () {}
});
}
function changePagination(option) {
if (option != "") {
getresult("pagination/getresult.php");
}
}
function labnolThumb(id) {
var thumb = '<img src="https://i.ytimg.com/vi/ID/hqdefault.jpg">',
play = '<div class="play"></div>';
return thumb.replace("ID", id) + play;
}
function labnolIframe() {
var iframe = document.createElement("iframe");
var embed = "https://www.youtube.com/embed/ID?autoplay=1";
iframe.setAttribute("src", embed.replace("ID", this.dataset.id));
iframe.setAttribute("frameborder", "0");
iframe.setAttribute("allowfullscreen", "1");
this.parentNode.replaceChild(iframe, this);
} | unknown | |
d19683 | test | here is an example of reading from a .json file.
Set up the json file correctly, using the right syntax.
{
"token": "<a token goes here>" //in your local json file - called config.json in this case
}
const config = require('./config.json'); // load the local json file as an object
client.login(config.token) // uses the token param read from config.json
When reading from the json file, you need the fs package
const fs = require('fs');
//assuming your bot has args
let reason = args.slice(1,2) //removes the command and the @
let user = message.mentions.users.first()
//create a new obj
var obj = {
table: []
};
//add some data to it e.g.
obj.table.push({reason: `${reason}`, user:`${user.username}`});
//convert it to json using the json#stringify() method
var json = JSON.stringify(obj);
//write to the json file
fs.writeFile('myjsonfile.json', json, 'utf8', callback); // 'callback' is a completion function you define, e.g. err => { if (err) console.log(err); }
//if you want to append, do this:
fs.readFile('myjsonfile.json', 'utf8', function readFileCallback(err, data){
if (err){
console.log(err);
} else {
obj = JSON.parse(data); //now it an object
obj.table.push({id: 2, square:3}); //add some data
json = JSON.stringify(obj); //convert it back to json
fs.writeFile('myjsonfile.json', json, 'utf8', callback); // write it back
}});
credit for the above goes to @kailniris here | unknown | |
d19684 | test | items[j].compareTo(items[j]) should be items[j].compareTo(temp), otherwise you're just comparing the item against itself - you need to be comparing it against the object you want to insert.
Then items[j] = temp; will also cause an ArrayIndexOutOfBoundsException because, at the end of the loop, items[j] is smaller than temp, or j == -1, so we need to insert at the position after that - the simplest fix is just changing that to items[j+1] = temp;.
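Putting both fixes together, the corrected loop presumably looks something like this (the method name and element type are assumptions, since the original code isn't shown):
static <T extends Comparable<T>> void insertionSort(T[] items) {
    for (int i = 1; i < items.length; i++) {
        T temp = items[i];
        int j = i - 1;
        while (j >= 0 && items[j].compareTo(temp) > 0) {
            items[j + 1] = items[j];   // shift the larger element one slot right
            j--;
        }
        items[j + 1] = temp;           // safe even when j has reached -1
    }
}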
A: Algorithm:
for i ← 1 to length(A)
j ← i
while j > 0 and A[j-1] > A[j]
swap A[j] and A[j-1]
j ← j - 1
Translated to Java:
import java.util.*;
class InsertionSortTest {
public static int[] insertionSort(int[] A) {
for (int i = 1; i < A.length; i++) {
int j = i;
while (j > 0 && A[j-1] > A[j]) {
int t = A[j];
A[j] = A[j-1];
A[j-1] = t;
j--;
}
}
return A;
}
public static void main (String[] args) {
int[] arr = { 5, 3, 0, 2, 1, 4 };
System.out.println(Arrays.toString(insertionSort(arr)));
}
} | unknown | |
d19685 | test | You can't alter a field and make it not null without it checking the fields. If you are really concerned about not doing it off hours you can add a constraint to the field which checks to make sure it isn't null instead. This will allow you to use the with no check option, and not have it check each of the 4 million rows to see if it updates.
CREATE TABLE Test
(
T0 INT Not NULL,
  T1 INT NULL
)
INSERT INTO Test VALUES(1, NULL) -- Works!
ALTER TABLE Test
    WITH NOCHECK
    ADD CONSTRAINT N_null_test CHECK (T1 IS NOT NULL)
-- (note: an ALTER COLUMN T1 INT NOT NULL here would still scan the table and would fail on the existing NULL row)
INSERT INTO Test VALUES(1, NULL) -- Doesn't work now!
Really you have two options (added a third one see edit):
*
*Use the constraint which will prevent any new rows from being updated and leave the original ones unaltered.
*Update the rows which are null to something else and then apply the not null alter option. This really should be run in off hours, unless you don't mind processes being locked out of the table.
Depending on your specific scenario, either option might be better for you. I wouldn't pick the option because you have to run it in off hours though. In the long run, the time you spend updating in the middle of the night will be well spent compared the headaches you'll possibly face by taking a short cut to save a couple of hours.
This all being said, if you are going to go with option two you can minimize the amount of work you do in off hours. Since you have to make sure you update the rows to not null before altering the column, you can write a cursor to slowly (relative to doing it all at once)
*
*Go through each row
*Check to see if it is null
*Update it appropriately.
This will take a good while, but it won't lock the whole table or block other programs from accessing it. (Don't forget the with(rowlock) table hint!)
EDIT: I just thought of a third option:
You can create a new table with the appropriate columns, and then export the data from the original table to the new one. When this is done, you can then drop the original table and change the name of the new one to be the old one. To do this you'll have to disable the dependencies on the original and set them back up on the new one when you are done, but this process will greatly reduce the amount of work you have to do in the off hours. This is the same approach that SQL Server uses when you make column ordering changes to tables through the management studio. For this approach, I would do the insert in chunks to make sure that you don't cause undue stress on the system or stop others from accessing it. Then on the off hours, you can drop the original, rename the second, and apply dependencies etc. You'll still have some off hours work, but it will be minuscule compared to the other approach.
Link to using sp_rename.
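The final swap might look roughly like this (table names are assumptions; the dependency drop/create scripts are omitted):
BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.BigTable', 'BigTable_old';     -- move the original out of the way
    EXEC sp_rename 'dbo.BigTable_new', 'BigTable';     -- promote the copy that has the NOT NULL column
COMMIT TRANSACTION;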
A: The only way to do this "quickly" (*) that I know of is by
*
*creating a 'shadow' table which has the required layout
*adding a trigger to the source-table so any insert/update/delete operations are copied to the shadow-table (mind to catch any NULL's that might popup!)
*copy all the data from the source to the shadow-table, potentially in smallish chunks (make sure you can handle the already copied data by the trigger(s), make sure the data will fit in the new structure (ISNULL(?) !)
*script out all dependencies from / to other tables
*when all is done, do the following inside an explicit transaction :
*
*get an exclusive table lock on the source-table and one on the shadowtable
*run the scripts to drop dependencies to the source-table
*rename the source-table to something else (eg suffix _old)
*rename the shadow table to the source-table's original name
*run the scripts to create all the dependencies again
You might want to do the last step outside of the transaction as it might take quite a bit of time depending on the amount and size of tables referencing this table, the first steps won't take much time at all
As always, it's probably best to do a test run on a test-server first =)
PS: please do not be tempted to recreate the FK's with NOCHECK, it renders them futile as the optimizer will not trust them nor consider them when building a query plan.
(*: where quickly comes down to : with the least possible downtime)
A: Sorry for the discouragement, but:
*
*Any ways to speed it up: No, not if you want to change the table structure itself
*or am I stuck just doing it overnight during off-hours? Yes, and that's probably for the best, as @HLGEM pointed out
*Also could this cause a table lock? Yes
Not directly relevant to you (because it's about going from NOT NULL to NULL), but interesting read on this topic: http://beyondrelational.com/blogs/sankarreddy/archive/2011/04/05/is-alter-table-alter-column-not-null-to-null-always-expensive.aspx
And finally some ancient history - on an equivalent question in a forum in 2005, the same suggestion was made as @Kevin offered above - using a constraint insteadof making the column itself non-nullable: http://www.sqlteam.com/Forums/topic.asp?TOPIC_ID=50671 | unknown | |
d19686 | test | Knockout-Kendo is set to depend on a kendo module. The easiest thing to do is point kendo at the kendo.web file like: kendo: kendo.web.min (in whatever directory kendo.web.min.js is in). | unknown | |
d19687 | test | It is being a little difficult to realize what is the source of your problem without the classes, although, a guess would be at the Consumptions property. If it is a list (as it seems by its name) it should be mapped with HasMany instead of References.
Besides, maybe you could attach the stack trace with the InnerException. This could give us a clue. | unknown | |
d19688 | test | You can pass an array as the second parameter to .reduce(); within the .reduce() callback, use destructuring assignment to get the Promise and object passed
return subDeps.reduce(([p, acc], k) => {
  return [p.then(v => getAllPromises(k, v)).then(v => Object.assign(acc, {[k]: v})), acc];
}, [Promise.resolve(val), {}]).shift() //.then(result => /* result : acc */)
A: I'm going out on a bit of a limb, here, in thinking that the whole list of subDeps for any child node can be loaded in parallel. In looking at the problem, deeper, I see no reason for that not to be the case. In fact, the only potential problem I could see is that some value above but not below this point could be a promise, and thus, you might even be able to strike the promises from this particular recursive function...
but...
Here's what I saw as a plausible refactor. Let me know if that's missing some obvious need.
const appendKeyValue = (dict, [key, value]) => {
dict[key] = value;
return dict;
};
const getKeyValuePair = hash => key =>
getRefactoredPromises(hash, hash[key])
.then(value => [key, value]);
const getRefactoredPromises = (someHash, subDeps) => {
return Promise.all(subDeps.map(getKeyValuePair(someHash)))
.then(pairs => pairs.reduce(appendKeyValue, {}));
};
In fact, if I'm right about this refactor, then you don't even need the promises in there. It just becomes:
const appendKeyValue = (dict, [key, value]) => {
dict[key] = value;
return dict;
};
const getKeyValuePair = hash => key =>
[key, getRefactoredHash(hash, hash[key])];
const getRefactoredHash = (someHash, subDeps) =>
subDeps.map(getKeyValuePair(someHash))
.reduce(appendKeyValue, {});
If the root level of this call happens to be a promise, that should be inconsequential at this point, unless there's something I'm missing (it IS 6:20am, and I've yet to close my eyes). | unknown | |
d19689 | test | mainAxisAlignment: MainAxisAlignment.end,
Try deleting this line. | unknown | |
d19690 | test | Are you just looking for a find and replace? If not, can you expand on your question?
stringy = stringy.replace("<Banana><Raffle/>", "<Banana>" + stringier + "<Raffle/>");
A: This can be another option:
stringy = stringy.substring(0, stringy.indexOf("<Banana>") + "<Banana>".length())
+ stringier
+ stringy.substring(stringy.indexOf("<Banana>") + "<Banana>".length());
A: You can use String.indexOf to find an occurrence of a string within another string. So, maybe it would look something like this.
String stringy = "<Monkey><Banana><Raffle/>";
String stringier = "<Cool stuff to add/>";
String placeToAdd = "<Banana><Raffle/>";
String addAfter = "<Banana>";
int temp;
temp = stringy.indexOf(placeToAdd);
if(temp != -1){
temp = temp+addAfter.length();
    stringy = stringy.substring(0, temp) + stringier + stringy.substring(temp, stringy.length());
System.out.println(stringy);
} else {
System.out.println("Stringier \'" + stringier+"\" not found");
}
After looking at other answers, replace is probably a better option than the substring stuff.
A: Java strings are immutable, but StringBuilder is not.
public class StringyThingy {
public static void main(String[] args) {
StringBuilder stringy = new StringBuilder("<Monkey><Banana><Raffle/>");
System.out.println(stringy);
// there has to be a more elegant way to find the index, but I'm busy.
stringy.insert(stringy.indexOf("Banana")+"Banana".length()+1,"<Cool stuff to add/>");
System.out.println(stringy);
}
}
// output
<Monkey><Banana><Raffle/>
<Monkey><Banana><Cool stuff to add/><Raffle/> | unknown | |
d19691 | test | Adding to the comment: if you want to run the function only a certain number of times, just use a counter variable to track the number of attempts:
Added a reset button to reset the game.
var counter = 0;
function myF() {
if (counter != 5) {
counter++;
document.getElementById("slotLeft").innerHTML = "Try count: " + counter;
var slotOne = Math.floor(Math.random() * 3) + 1;
var slotTwo = Math.floor(Math.random() * 3) + 1;
var slotThree = Math.floor(Math.random() * 3) + 1;
document.getElementById("slotOne").innerHTML = slotOne;
document.getElementById("slotTwo").innerHTML = slotTwo;
document.getElementById("slotThree").innerHTML = slotThree;
if (slotOne == slotTwo && slotTwo == slotThree) {
document.getElementById("slotOne").style.backgroundColor = "#48bd48";
document.getElementById("slotTwo").style.backgroundColor = "#48bd48";
document.getElementById("slotThree").style.backgroundColor = "#48bd48";
document.getElementById("winner").classList.add("show");
counter = 5; // Edited this line
}
} else {
console.log('Game over');
}
}
function myF1(){
counter = 0;
document.getElementById("slotOne").innerHTML = "";
document.getElementById("slotTwo").innerHTML = "";
document.getElementById("slotThree").innerHTML = "";
}
<button onclick="myF()">Check</button>
<button onclick="myF1()">Restart Game</button>
<div id="slotLeft">
</div>
<div id="slotOne">
</div>
<div id="slotTwo">
</div>
<div id="slotThree">
</div>
<div id="winner">
</div>
A:
function myF() {
var slotOneElem = document.getElementById("slotOne");
var slotTwoElem = document.getElementById("slotTwo");
var slotThreeElem = document.getElementById("slotThree");
var generateRand = function() {
return Math.floor(Math.random() * 3) + 1;
}
return (function () {
var slotOne = generateRand();
var slotTwo = generateRand();
var slotThree = generateRand();
slotOneElem.innerHTML = slotOne;
slotTwoElem.innerHTML = slotTwo;
slotThreeElem.innerHTML = slotThree;
if (slotOne === slotTwo && slotTwo === slotThree) {
slotOneElem.style.backgroundColor = "#48bd48";
slotTwoElem.style.backgroundColor = "#48bd48";
slotThreeElem.style.backgroundColor = "#48bd48";
document.getElementById("winner").classList.add("show");
// Here is the win
return true;
}
return false;
})();
}
var finished = myF();
while (!finished) {
finished = myF();
} | unknown | |
d19692 | test | Let's assume we have a model called "mod"
mod <- lm(v1 ~ v2, data= df)
You can use the "broom" package.
library(broom)
#create dataframes with results
res_mod <- as.data.frame(tidy(mod))
res_mod2 <- as.data.frame(glance(mod))
#then export as two csvs
write.csv(res_mod, "res_mod.csv")
write.csv(res_mod2, "res_mod2.csv") | unknown | |
d19693 | test | I suggest checking your code. It is missing a choice for St_Out of the state signal. A case statement must cover all possible values. This can be done with a when others => choice, but that may not be suitable here.
You will also have issues with this code. Your state_logic process is missing many signals from its sensitivity list. This will lead to a simulation/synthesis mismatch. For an asynchronous process, every signal that is read inside the process needs to be in the sensitivity list.
An easy fix for this is to use the VHDL 2008 process(all). This will force the compiler to work out what the sensitivity list should be from the code inside the process. | unknown |
d19694 | test | Here's an example.
#include <stdio.h>
#include <stdbool.h>
#define N 2
#define NO_UNIQUE -1
int find_max_sum(int b[][N])
{
int row_sum, i, j;
int row_max = -1;
bool unique = false;
for (i = 0; i < N; ++i) {
row_sum = 0;
for (j = 0; j < N; ++j)
row_sum += b[i][j];
if (row_max < row_sum) {
row_max = row_sum;
unique = true;
} else if (row_max == row_sum)
unique = false;
}
if (unique)
return row_max;
else {
printf("No unique max.\n");
return NO_UNIQUE;
}
}
int main(void)
{
int b[N][N] = {1, 2, 3, 4};
printf("Max sum is %d\n", find_max_sum(b));
return 0;
}
A: I suggest using a third variable (let's call it rowsWithMaxCount) to store the number of rows with the current max value, such that:
*
*if you find a row with a new maximum then rowsWithMaxCount = 1
*if you find a row such that row_max == row_sum then ++rowsWithMaxCount
*otherwise rowsWithMaxCount is unaffected
This will save you from looping over the bidimensional array a second time, which is a waste of code, since you can obtain all the information you need in a single traversal of the array.
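A minimal sketch of that counting step, borrowing the row_sum/row_max names from the answer above (how it slots into your own loop is up to you):
if (row_sum > row_max) {
    row_max = row_sum;        /* new maximum found: reset the count */
    rowsWithMaxCount = 1;
} else if (row_sum == row_max) {
    ++rowsWithMaxCount;       /* another row ties the current maximum */
}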
"returning a printf" doesn't make any sense and it's not possible, if you declare the function to return an int then you must return an int. Consider using a special value to signal the caller that there is no unique maximum value. Eg, assuming values are always positive:
static const int NO_UNIQUE_MAX = -1;
int find_max_sum(int b[N][N]) {
...
if (counter > 1)
return NO_UNIQUE_MAX;
...
}
But this will prevent you from returning the non-unique maximum value. If you need to return both, then you could declare a new type, for example:
struct MaxRowStatus {
int value;
int count;
};
So that you can precisely return both values from the function.
A: You may be over-thinking the function, if I understand what you want correctly. If you simply want to return the row index for the row containing a unique max sum, or print no unique max. if the max sum is non-unique, then you only need a single iteration through the array using a single set of nested loops.
You can even pass a pointer as a parameter to the function to make the max sum available back in your calling function (main() here) along with the index of the row in which it occurs. The easiest way to track the uniqueness is to keep a toggle (0, 1) tracking the state of the sum.
An example would be:
int maxrow (int (*a)[NCOL], size_t n, long *msum)
{
long max = 0;
size_t i, j, idx = 0, u = 1;
for (i = 0; i < n; i++) { /* for each row */
long sum = 0;
for (j = 0; j < NCOL; j++) /* compute row sum */
sum += a[i][j];
if (sum == max) u = 0; /* if dup, unique 0 */
if (sum > max) /* if new max, save idx, u = 1 */
max = sum, idx = i, u = 1;
}
if (u) { /* if unique, update msum, return index */
if (msum) *msum = max;
return idx;
}
fprintf (stderr, "no unique max.\n");
return -1; /* return -1 if non-unique */
}
(note: if you don't care about having the max sum available back in the caller, simply pass NULL for the msum parameter)
A short test program could be the following. Simply uncomment the second row to test the behavior of the function for a non-unique max sum:
#include <stdio.h>
#include <stdlib.h>
enum { NCOL = 7 };
int maxrow (int (*a)[NCOL], size_t n, long *msum)
{
long max = 0;
size_t i, j, idx = 0, u = 1;
for (i = 0; i < n; i++) { /* for each row */
long sum = 0;
for (j = 0; j < NCOL; j++) /* compute row sum */
sum += a[i][j];
if (sum == max) u = 0; /* if dup, unique 0 */
if (sum > max) /* if new max, save idx, u = 1 */
max = sum, idx = i, u = 1;
}
if (u) { /* if unique, update msum, return index */
if (msum) *msum = max;
return idx;
}
fprintf (stderr, "no unique max.\n");
return -1; /* return -1 if non-unique */
}
int main (void) {
int a[][7] = {{ 0, 9, 3, 6, 4, 8, 3 },
/* { 3, 9, 2, 7, 9, 1, 6 }, uncomment for test */
{ 6, 1, 5, 2, 6, 3, 4 },
{ 4, 3, 3, 8, 1, 2, 5 },
{ 3, 9, 2, 7, 9, 1, 6 }},
maxidx;
long sum = 0;
size_t nrow = sizeof a/sizeof *a;
if ((maxidx = maxrow (a, nrow, &sum)) != -1)
printf (" max sum '%ld' occurs at row : %d (0 - indexed).\n",
sum, maxidx);
return 0;
}
Example Use/Output
For the unique sum case:
$ ./array2Drow
max sum '37' occurs at row : 3 (0 - indexed).
non-unique case:
$ ./array2Drow
no unique max.
Look it over and let me know if you have any questions, or if I misinterpreted your needs. | unknown | |
d19695 | test | what am I missing? d['close'] > d['MA']?
Edit: Re, your comments
[...] what I want to return is how many times one element of "close" is > to the matching element of MA . (same tuple index)
sum( pair[0] > pair[1] for pair in zip(d['close'], d['MA']) )
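As a quick illustration with made-up numbers (the tuples below are purely for demonstration), that expression counts the positions where close beats MA:
close = (10, 12, 9, 15)
ma = (11, 11, 11, 11)
# count the positions where close is greater than the moving average
print(sum(pair[0] > pair[1] for pair in zip(close, ma))) # -> 2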
A: From the Python docs:
Tuples and lists are compared lexicographically using comparison of corresponding elements. This means that to compare equal, each element must compare equal and the two sequences must be of the same type and have the same length.
If not equal, the sequences are ordered the same as their first differing elements. For example, cmp([1,2,x], [1,2,y]) returns the same as cmp(x,y). If the corresponding element does not exist, the shorter sequence is ordered first (for example, [1,2] < [1,2,3]).
So as @TokenMacGuy says, you can simply use d['close'] > d['MA'] to compare the respective tuples. | unknown | |
d19696 | test | The relationship between Solidity and Ethereum is something like the relationship between Objective-C and iPhones. The former is a programming language used to write code that runs on the latter.
The actual implementation of the blockchain (the data structure, consensus protocols, etc.) is implemented in other languages (Go in the case of geth, Rust in the case of Parity).
A:
As I study solidity language, I can not figure out how it makes a blockchain in the end
The Solidity programming language serves the sole purpose of allowing for the development of smart contracts on an Ethereum-based blockchain network. These smart contracts are simply the logic (code) that is triggered by internal or external actors (respectively users or other code) within the blockchain network and interpreted and executed by the Ethereum Virtual Machine (EVM).
Solidity is not used to create the Ethereum blockchain network. The blockchain network must already be in existence for the smart contract logic to be deployed to the network. An Ethereum blockchain network can be created by any of the multiple Ethereum clients due to the unambiguous nature of the Ethereum protocols.
I can map a contract to a class; but is it that every instance is a chain or what?
Mappings in Solidity are a basic key/value data structure akin to a hash map. You can map keys of data type X to a value of data type Y. There is no explicit relation between Solidity mappings and the blockchain network itself.
Whenever a smart contract function is called, a transaction is subsequently created on the blockchain network that is representative of the executed function as well as the caller of that function. This transaction is persisted on the blockchain and serves as immutable proof that an action has taken place. | unknown | |
d19697 | test | NSString *name =[dataArray valueForKey:@"name"];
This doesn't do what you think it'll do. valueForKey:, when sent to an array, returns an array of the values corresponding to the given key for all the items in the array. So, that line will assign an array of the "name" values for all the items in dataArray, despite the fact that you declared name as an NSString. The same goes for the subsequent lines.
What you probably want instead is:
for (NSManagedObject *item in dataArray) {
NSString *name = [item valueForKey:@"name"];
...
Better, if you have an NSManagedObject subclass -- let's call it Person -- representing the entity you're requesting, you can say:
for (Person *person in dataArray) {
NSString *name = person.name;
...
which leads to an even simpler version:
for (Person *person in dataArray) {
int response = [network sendName:person.name
withDateOfBirth:person.dob
andGender:person.gender
forID:person.id];
although I'd change the name of that method to leave out the conjunctions and prepositions. -sendName:dateOfBirth:gender:id: is enough, you don't need the "with", "and", and "for." | unknown | |
d19698 | test | Quoting from the article
Step into – An action to take in the debugger. If the line does not contain a function it behaves the same as “step over” but if it
does the debugger will enter the called function and continue
line-by-line debugging there.
Step over – An action to take in the debugger that will step over a given line. If the line contains a function the function will be
executed and the result returned without debugging each line.
So what is happening in your case is that the debugger is stepping through the implementation of a function from the framework or library that you used, which is invoked in your code.
As mentioned in the comments, use step over instead of step into, so the debugger will not step through that framework or library source code. | unknown |
d19699 | test | Yes, you can. I just did. Write this code in your layout file:
<CheckBox
android:layout_width="match_parent"
android:layout_height="match_parent"
android:onClick="click"/>
And in your activity:
public void click(View view)
{
}
A: Yes! it's possible.
To define the click event handler for a checkbox, you should add the android:onClick attribute to the element in your XML layout. The value for this attribute must be the name of the method you want to call in response to a click event. The Activity hosting the layout must then implement the corresponding method.
The method you declare in the android:onClick attribute must have a signature exactly as shown above. Specifically, the method must:
1. Be public
2. Return void, and
3. Define a View as its only parameter (this will be the View that was clicked)
Below are the code examples. XML:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent">
<CheckBox android:id="@+id/checkbox_meat"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/meat"
android:onClick="onCheckboxClicked"/>
<CheckBox android:id="@+id/checkbox_cheese"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/cheese"
android:onClick="onCheckboxClicked"/>
</LinearLayout>
The activity method should look as follows:
public void onCheckboxClicked(View view) {
// Is the view now checked?
boolean checked = ((CheckBox) view).isChecked();
// Check which checkbox was clicked
    switch(view.getId()) {
        case R.id.checkbox_meat:
            if (checked) {
                // Put some meat on the sandwich
            } else {
                // Remove the meat
            }
            break;
        case R.id.checkbox_cheese:
            if (checked) {
                // Cheese me
            } else {
                // I'm lactose intolerant
            }
            break;
        // TODO: Veggie sandwich
    }
}
I hope you find this helpful. | unknown | |
d19700 | test | Just looking at the slitslider docs I think you need to add these two lines to the following excerpt of your code:
.slitslider({
// slideshow on / off
autoplay : true,
// time between transitions
interval : 4000,
onBeforeChange : function(slide, pos) {
$nav.removeClass('nav-dot-current');
$nav.eq(pos).addClass('nav-dot-current');
} | unknown |