_id (stringlengths 2-6) | partition (stringclasses, 3 values) | text (stringlengths 4-46k) | language (stringclasses, 1 value) | title (stringclasses, 1 value) |
---|---|---|---|---|
d9201 | train | Assuming the SqlXml object contains exactly what was mentioned in the question, you might want to use the following helper method. Should work for any type that has been serialized this way, even complex objects.
static T GetValue<T>(SqlXml sqlXml)
{
T value;
// using System.Xml;
using (XmlReader xmlReader = sqlXml.CreateReader())
{
// using System.Xml.Serialization;
XmlSerializer xmlSerializer = new XmlSerializer(typeof(T));
value = (T) xmlSerializer.Deserialize(xmlReader);
}
return value;
}
Example case:
// using System.IO; using System.Text;
using (MemoryStream stream = new MemoryStream())
using (XmlWriter writer = new XmlTextWriter(stream, Encoding.ASCII))
{
writer.WriteRaw("<int>123</int>");
writer.Flush();
stream.Seek(0, SeekOrigin.Begin);
using (XmlReader reader = new XmlTextReader(stream))
{
SqlXml sqlXml = new SqlXml(reader);
int value = GetValue<Int32>(sqlXml);
Debug.Assert(123 == value);
}
} | unknown | |
d9202 | train | Assuming you have a relatively light workload, having a node that manages graphite-web, grafana, and carbon (which itself manages the whisper database) should be fine.
Then you should have a separate node for your statsd. Each of your machines/applications should have statsd client code that sends your metrics to this statsd node. This statsd node should then forward these metrics onto your carbon node.
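For illustration, a minimal client-side sketch using the Python statsd package (the host, port, prefix and metric names below are placeholders, not values from your setup):
from statsd import StatsClient

client = StatsClient(host="statsd.internal", port=8125, prefix="myapp")

client.incr("requests")            # counter: one more request served
client.timing("db.query_ms", 320)  # timer: a duration in milliseconds
client.gauge("queue.depth", 42)    # gauge: a point-in-time value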
For larger workloads that stress a single node, you'll need to either scale vertically (get a more powerful node to host your carbon/statsd instances), or start clustering those services.
Carbon clusters tend to use some kind of relay that you send to, which then handles forwarding those metrics to the cluster (usually using consistent hashing). You could use a similar setup to consistently hash metrics across a cluster of statsd servers. | unknown | |
d9203 | train | You have an anonymous function as an argument to CONTRACT.name1. Judging by the signature of this anonymous function, name1 seems to be asynchronous.
As a result, the call to name1 will return immediately, while the work that is supposed to be done by name1 will execute later in the event loop (or it is waiting for IO). That is why stage is always undefined.
What you want to do is this:
test () {
let that = this;
CONTRACT.name1(function(err, res) {
stage = (res);
alert(stage)
that.stageIs = stage;
});
}
There are two things happening here. We are assigning a value to stage when the callback is invoked by the asynchronous function name1. We are also assigning a reference to this inside test, because the anonymous function will execute in a different context (and therefore the anonymous function's this would point to something else).
Alternatively, you could use an arrow function to get rid of the this-binding problem.
test () {
CONTRACT.name1((err, res) => {
stage = (res);
alert(stage)
this.stageIs = stage;
});
} | unknown | |
d9204 | train | You'll need to use bracket ([]) notation to get at the property, since the key contains a space.
So:
const displayName = newValue["display name"]; | unknown | |
d9205 | train | It's because you're referencing the global name t. By the time the sleeps end, the loop is over, and t remains bound to the last thread (the 10th thread) the loop created.
In your alternative, the results aren't actually well-defined. There you reference the global t while the loop is still running, so it's likely to be bound to the most recent thread created - but it doesn't have to be.
Note: if you don't have convenient access to the thread object currently running, you can use
threading.currentThread()
to get it. Then
threading.currentThread().getName()
will return the name of the thread running it.
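For example, the count function from the question could be rewritten like this (a sketch reusing the question's global NUM):
import threading
import time

NUM = 0

def count():
    global NUM
    NUM += 1
    # ask the running thread for its own name instead of going through the global t
    name = threading.currentThread().getName()
    time.sleep(1)
    print(name + ":" + "NUM is " + str(NUM))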
A: Your function queries for a t, but no t is defined in the function:
def count():
global NUM
NUM += 1
name = t.getName() # use outer t
time.sleep(1)
print(name+":"+"NUM is "+str(NUM))
Python's name lookup thus falls back to a t in the directly enclosing (here: global) scope. And indeed, you assign to a t in that outer scope, so it will take that value.
Now since you write t = ... in a for loop, that t changes rapidly. It is very likely that the for loop has already reached the last value - especially because of the way Python schedules threads - before the first thread actually fetches that t. As a result, all the threads fetch the t referring to the last constructed thread.
If we however rewrite the function to:
NUM = 0
def count():
name = t.getName() # t fetched immediately
global NUM
NUM += 1
time.sleep(1)
print(name+":"+"NUM is "+str(NUM))
I get:
Thread-11:NUM is 10
Thread-12:NUM is 10
Thread-13:NUM is 10
Thread-14:NUM is 10
Thread-15:NUM is 10
Thread-17:NUM is 10
Thread-16:NUM is 10
Thread-19:NUM is 10
Thread-18:NUM is 10
Thread-10:NUM is 10
on my machine. Of course this does not guarantee that every thread will grab the correct t, since it is still possible that a thread only starts working - and only then fetches the t variable - later in the process.
A: You have the same problem for both the thread name and the count NUM: by the time you get to the first print statement, all 10 threads have been started. You have only one global variable t for the thread, and one global NUM for the count. Thus, all you see is the last value, that for the 10th thread. If you want separate values printed, you need to supply your code with a mechanism to report them as they're launched, or keep a list through which you can iterate.
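A sketch of that idea (not from the original post): pass each thread its own value and keep the Thread objects in a list, so nothing depends on the shared global t:
import threading
import time

def count(num):
    name = threading.currentThread().getName()
    time.sleep(1)
    print(name + ":" + "NUM is " + str(num))

threads = [threading.Thread(target=count, args=(i + 1,)) for i in range(10)]
for t in threads:
    t.start()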
A: I advise you to try this:
NUM = 0
def count():
global NUM
NUM += 1
num = NUM
name = t.getName()
time.sleep(1)
print("t.getName: " + t.getName() + ", name: " + name + ":" + ", NUM: " + str(NUM) + ", num: " + str(num))
for i in range(10):
t = threading.Thread(target=count)
t.start()
Result:
t.getName: Thread-10, name: Thread-10:, NUM: 10, num: 10
t.getName: Thread-10, name: Thread-6:, NUM: 10, num: 6
t.getName: Thread-10, name: Thread-3:, NUM: 10, num: 3
t.getName: Thread-10, name: Thread-5:, NUM: 10, num: 5
t.getName: Thread-10, name: Thread-4:, NUM: 10, num: 4
t.getName: Thread-10, name: Thread-9:, NUM: 10, num: 9
t.getName: Thread-10, name: Thread-7:, NUM: 10, num: 7
t.getName: Thread-10, name: Thread-2:, NUM: 10, num: 2
t.getName: Thread-10, name: Thread-1:, NUM: 10, num: 1
t.getName: Thread-10, name: Thread-8:, NUM: 10, num: 8
t.getName() inside the print is evaluated at print time through the global reference t. By the time the print statements reach the console, t refers to the latest thread - which is why the t.getName column is always Thread-10, while name (captured earlier in each thread) differs. | unknown | |
d9206 | train | Whether to keep it or not is a personal choice. I sometimes do, but fewer LOC makes for cleaner code. To remove it you have several options. You can leave the respond_to as is and just remove the html format, e.g.:
def destroy
@comment.destroy
respond_to do |format|
format.json { head :no_content }
end
end
but you can also remove the respond_to from each action (even fewer LOC) with something like this:
# put this LOC at the top of your controller, outside of any action
respond_to :json
# then each action is much simpler... you just assume it's always json
def destroy
@comment.destroy
head :no_content
end | unknown | |
d9207 | train | JMeter has a random variable configuration element for HTTP Request sampling.
A: You can create redirect.php which will contain anything you want. Remember, redirect.php itself will create additional load.
<?
$queries = array('query1', 'query2');
$query = $queries[rand(0, count($queries)-1)];
header('Location: http://search.site.com/?q='.urlencode( $query )); | unknown | |
d9208 | train | Figured this out.
The key is within class DatasetReviewsView(DetailView) in views.py. I first needed to change this inheritance to a ListView to enable what I was looking for.
Next, I just needed to provide context for what I wanted to show in my html page. This is easily done by overriding the get_context_data function, which provides the context data for templates displaying this class-based view.
It makes it very easy to leverage python to provide the information I want. I could query the id of the dataset I was looking at with id = self.kwargs['pk'] (which must be included in the function parameters for get_context_data), and then I could just filter all reviews to just those with a dataset matching this id. Within the html I could then iterate over the variable num_reviews.
There is some other code that averages all the ratings to provide an overall rating as well.
class DatasetReviewsView(ListView):
model = DatasetReview
context_object_name = 'reviews'
success_url = '/'
def get_context_data(self, **kwargs):
id = self.kwargs['pk']
context = super(DatasetReviewsView, self).get_context_data(**kwargs)
context['id'] = id
context['name'] = Dataset.objects.get(pk=id).title
context['num_reviews'] = len(DatasetReview.objects.filter(dataset=id))
tot = 0
for review in DatasetReview.objects.filter(dataset=id):
tot += review.rating
context['avg_rating'] = tot / context['num_reviews']
return context | unknown | |
d9209 | train | The keys are identical, only their formats differ:
* The input BAlW... is an uncompressed public EC key (Base64 encoded).
* The output MFkw... is an ASN.1/DER encoded key in X.509/SPKI format (Base64 encoded).
This can be easily verified by encoding both keys not in Base64 but in hex:
input : 0409565aee3a8a5fe5cba03177fa9c9668445611ad87ddd0379fa1dc904910a12684a7752dfaa4a42a97d5c4f57100ecf673eaf4bab4fc5b598ad923afb46a77de
output: 3059301306072a8648ce3d020106082a8648ce3d0301070342000409565aee3a8a5fe5cba03177fa9c9668445611ad87ddd0379fa1dc904910a12684a7752dfaa4a42a97d5c4f57100ecf673eaf4bab4fc5b598ad923afb46a77de
As can be seen, the ASN.1/DER encoded X.509/SPKI key contains the uncompressed public key at the end (the last 65 bytes).
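For illustration, the same relationship can be reproduced with a short Python sketch using the cryptography package (it generates a fresh P-256 key, since the full key strings are abbreviated above):
import base64
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# generate a throwaway P-256 key just to demonstrate the encoding relationship
pub = ec.generate_private_key(ec.SECP256R1()).public_key()

spki_der = pub.public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
point = pub.public_bytes(Encoding.X962, PublicFormat.UncompressedPoint)

assert len(point) == 65
assert spki_der[-65:] == point              # the SPKI encoding ends with the uncompressed point

print(base64.b64encode(spki_der).decode())  # starts with "MFkw"
print(base64.b64encode(point).decode())     # starts with "BA" (the 0x04 prefix)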
Background:
Keep in mind that a public EC key is a point (x, y) on an EC curve (obtained by multiplying the private key by the generator point) and that there are different formats for its representation, e.g. the following two:
* The uncompressed format, which corresponds to the concatenation of a 0x04 byte, and the x and y coordinates of the point: 04|x|y.
For the secp256r1 curve (aka prime256v1 aka NIST P-256), x and y are both 32 bytes, so the uncompressed key is 65 bytes.
* The X.509/SPKI format as defined in RFC 5280. This format is described with ASN.1 and serialized/encoded with DER (s. here).
PublicKey#getEncoded() returns the ASN.1/DER encoded X.509/SPKI key. With an ASN.1/DER parser the key can be decoded and inspected, e.g. with https://lapo.it/asn1js/. | unknown | |
d9210 | train | You can use different view types, which means you can use a different layout for different types of view in your ListView. Simply override getItemViewType(int position) in your adapter. Then you can inflate the different layouts in your getView like this:
if (getItemViewType(position) == VIEW_TYPE_LEFT) {
convertView = inflater.inflate(R.layout.image_bubble_left, null);
holder.drawable = (NinePatchDrawable) convertView.findViewById(R.id.bubble_left_layout).getBackground();
holder.bubbleLayout = (RelativeLayout)convertView.findViewById(R.id.bubble_left_layout);
} else {
convertView = inflater.inflate(R.layout.image_bubble_right, null);
holder.drawable = (NinePatchDrawable) convertView.findViewById(R.id.bubble_right_layout).getBackground();
holder.bubbleLayout = (RelativeLayout)convertView.findViewById(R.id.bubble_right_layout);
}
But have you considered using a tablelayout?
A: I think this link will help you. It has a copy-paste example that just works: a ListView with headers and multiple views in each row.
Give it a try.
http://w2davids.wordpress.com/android-sectioned-headers-in-listviews/ | unknown | |
d9211 | train | I think this is the closest available, using .andSelf():
var items = $(this).closest('tr').next(':not([id])').andSelf();
This performs the .next() call, but then adds the tr back into the set. Everything is in the context of where the chain occurs, so either jumping around to add elements, or storing the original reference as you have it, is as close as is currently available. These are the helpful functions for jumping around: http://api.jquery.com/category/traversing/miscellaneous-traversal/
John Resig posted this concept a while back, but to my knowledge, nothing closer to that has made it into jQuery core. You can however use the code posted on the blog there if you want.
A: Possibly andSelf() http://api.jquery.com/andSelf/
This returns the previous selection, but you would then have to use :not([id]) on everything. | unknown | |
d9212 | train | You can make use of a bridge call from the JS to native code to trigger the UIAlertView. | unknown | |
d9213 | train | Sure, just compress it and include it in a directory under your APK (e.g. raw). But be aware that your APK will at least double in size. | unknown | |
d9214 | train | printf("%.3f", value) should work for you | unknown | |
d9215 | train | I cannot reproduce it on my system (bash 3.2.57, Java 1.8.0.65)
You can try working it around as follows:
* Surround the client id with quotation marks like
-Jclient_id="450a-b58d-204ebfe22d1e"
* Define the values in the user.properties file (lives under the "bin" folder of your JMeter installation) like:
users=1
loops=1
client_id=450a-b58d-204ebfe22d1e
See Apache JMeter Properties Customization Guide for more information on different JMeter properties types and ways of working with them | unknown | |
d9216 | train | The answer is simple, and not "I'm an idiot" although there's an argument for that.
The issue I am experiencing is that I use Scenario: as opposed to Scenario Outline:
Making that simple change removes the error and allows me to run my test. | unknown | |
d9217 | train | As suggested in the comments, I think you are better off using the private memory storage for the actual images. This will have better speed than storing BLOBs in SQLite.
If you still need to keep a DB, for example for complex image searches or such, I suggest you just replace the BLOB field in your DB with a string with the actual location of the image file.
Another solution is to keep the images as app assets, but this assumes the images are always the same and can't change dynamically, and I doubt this is your use case. | unknown | |
d9218 | train | w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Add("Access-Control-Allow-Headers", "Content-Type")
w.Header().Set("content-type", "application/json")
You can try adding them at the top of your handler function (the handleFunc). | unknown | |
d9219 | train | I was looking into this, too. I will share my findings.
According to this comment from the React Native team back in 2015, the team doesn't have resources to support it, yet.
Right now, we're focused on normal iOS and Android. We still a very small team and don't have the resources to target a different support right now. However, we open sourced React Native in the hope that we get help from the community to build those :)
Someone tried to build one with a lot of reverse engineering, but there are still unsolved issues causing crashes. | unknown | |
d9220 | train | A', 20)
# Add a bold format to use to highlight cells.
bold = workbook.add_format({'bold': True})
# Write some simple text.
worksheet.write('A1', 'RT')
workbook.close()
But I can't get any of my data to show up.
import random, math
num_features = 20
stim_to_vect = {}
all_stim = [1,2,3,4,5]
all_features = range(num_features)
zeros=[0 for i in all_stim]
memory=[]
def snoc(xs,x):
new_xs=xs.copy()
new_xs.append(x)
return new_xs
def concat(xss):
new_xs = []
for xs in xss:
new_xs.extend(xs)
return new_xs
def point_wise_mul(xs,ys):
return [x*y for x,y in zip(xs,ys)]
for s in snoc(all_stim, 0):
stim_to_vect[s]= []
for i in all_features:
stim_to_vect[s].append(random.choice([-1, 1]))
def similarity(x,y):
return(math.fsum(point_wise_mul(x,y))/math.sqrt(math.fsum(point_wise_mul(x,x))*math.fsum(point_wise_mul(y,y))))
def echo(probe,power):
echo_vect=[]
for j in all_features:
total=0
for i in range(len(memory)):
total+=math.pow(similarity(probe, memory[i]),power)*memory[i][j]
echo_vect.append(total)
return echo_vect
fixed_seq=[1,5,3,4,2,1,3,5,4,2,5,1]
prev_states={}
prev_states[0]=[]
prev=0
for curr in fixed_seq:
if curr not in prev_states.keys():
prev_states[curr]=[]
prev_states[curr].append(prev)
prev=curr
def update_memory(learning_parameter,event):
memory.append([i if random.random() <= learning_parameter else 0 for i in event])
for i in snoc(all_stim,0):
for j in prev_states[i]:
curr_stim = stim_to_vect[i]
prev_resp = stim_to_vect[j]
curr_resp = stim_to_vect[i]
update_memory(1.0, concat([curr_stim, prev_resp, curr_resp]))
def first_part(x):
return x[:2*num_features-1]
def second_part(x):
return x[2*num_features:]
def compare(curr_stim, prev_resp):
for power in range(1,10):
probe=concat([curr_stim,prev_resp,zeros])
theEcho=echo(probe,power)
if similarity(first_part(probe),first_part(theEcho))>0.97:
curr_resp=second_part(theEcho)
return power,curr_resp
return 10,zeros
def block_trial(sequence):
all_powers=[]
prev_resp = stim_to_vect[0]
for i in sequence:
curr_stim = stim_to_vect[i]
power,curr_resp=compare(curr_stim,prev_resp)
update_memory(0.7,concat([curr_stim,prev_resp,curr_resp]))
all_powers.append(power)
prev_resp=curr_resp
return all_powers | unknown | |
d9221 | train | The BundlePath is not a writable area for iOS applications.
From the Xamarin notes at http://developer.xamarin.com/guides/ios/application_fundamentals/working_with_the_file_system/
The following snippet will create a file into the writable documents area.
var documents =
Environment.GetFolderPath (Environment.SpecialFolder.MyDocuments); // iOS 7 and earlier
var filename = Path.Combine (documents, "Write.txt");
File.WriteAllText(filename, "Write this text into a file");
There's a note on the page for iOS 8 that a change is required to get the documents folder
var documents = NSFileManager.DefaultManager.GetUrls (NSSearchPathDirectory.DocumentDirectory,
NSSearchPathDomain.User) [0];
A: Assuming you're on iOS 8, the documents directory isn't connected to the bundle path. Use the function NSSearchPathForDirectoriesInDomains() (or URLsForDirectory:inDomains:) to find the documents directory.
A: I assume it should be a Directory.Exists() where you check if the directory exists:
if(!File.Exists(documentsFolder)){ | unknown | |
d9222 | train | For Kafka 0.10 you need to invoke the kafka-reassign-partitions.sh script to change the replication factor of the topic.
Here is a demo of a script that will display the topic before and after the change:
updateTopicReplication() {
TOPIC_NAME=$1
REPLICAS=$2
echo "****************************************"
echo "describe $TOPIC_NAME"
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic $TOPIC_NAME
JSON_FILE=./configChanges/${TOPIC_NAME}-update.json
echo $JSON_FILE
[ -e $JSON_FILE ] && rm $JSON_FILE
touch $JSON_FILE
echo -e "{\"version\":1, \"partitions\":[{\"topic\":\"${TOPIC_NAME}\",\"partition\":0,\"replicas\":[${REPLICAS}]}]}" >> $JSON_FILE
echo "****************************************"
echo "updating $TOPIC_NAME"
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file $JSON_FILE --execute
echo "****************************************"
echo "describe $TOPIC_NAME"
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic $TOPIC_NAME
}
A: It seems that kafka actually doesn't support increasing (or decreasing) the replication factor for a topic, according to the same docs I mentioned in the question. | unknown | |
d9223 | train | You can either do this after executing your query:
columns = [i[0] for i in cursor.description]
so you get
query = """select * from {}""".format(line_name)
tmp = cursor.execute(query)
columns = [i[0] for i in cursor.description]
results = tmp.fetchall()
and then do:
if results:
myFile = csv.writer(csv_file)
myFile.writerow(columns)
myFile.writerows(results)
or you can convert the results to dictionaries and use DictWriter, which accepts fieldnames | unknown | |
d9224 | train | There is another way. You can use history.push in your code:
import { useHistory } from 'react-router-dom';
const YourComponent = () => {
const history = useHistory();
return <button onClick={() => history.push('/profile')}>Profile</button>;
}; | unknown | |
d9225 | train | It's hard to build Chromium from source, but it's very easy to build a new browser based on Chromium.
With GitHub Electron, any app based on it is a real Chromium browser, and developers may create custom UI for their Electron-based browsers.
However, modern browsers require servers for storing users' sync data, which might not be easily affordable for a one-man development army trying to create a new browser for the masses.
An Electron app is a package of a Chromium sandbox and a Node.js backend; the two communicate through IPC, with Node.js being the side that can access all system resources.
GitHub Electron:
https://electronjs.org | unknown | |
d9226 | train | I would stick to using REST API calls, but fake the update at the start of the Redux action, do nothing except maybe add a proper id to your object on success, and revert the state only on failure.
Your reducer would kinda look like this in case of a create item action :
export default (state = {}, action) => {
switch (action.type) {
case ActionsTypes.CREATE_START:
// make the temporary changes
case ActionsTypes.CREATE_SUCCESS:
// update the previous temporary id with a proper id
case ActionsTypes.CREATE_FAILURE:
// delete the temporary changes
default:
return state;
}
};
And your actions like this :
const ActionLoading = item => ({
type: ActionsTypes.CREATE_START,
item,
});
const ActionSuccess = (item, createdItem) => ({
type: ActionsTypes.CREATE_SUCCESS,
item,
createdItem,
});
const ActionFailure = item => ({
type: ActionsTypes.CREATE_FAILURE,
item,
});
export default todo => async (dispatch) => {
dispatch(ActionLoading(todo)); // add the item to your state
const createdItem = await TodoClient.create(todo);
if (createdItem) {
dispatch(ActionSuccess(todo, createdItem)); // update the temp item with a real id
} else {
dispatch(ActionFailure(todo)); // remove the temporary item
}
};
It is mandatory that you give temporary ids to the data you are managing, both for performance's sake and to let React key the items rendered in maps properly. I personally use lodash's uniqueId.
You'll also have to implement this behavior for updates and removals, but it's basically the same:
* store the changes, update your object without waiting for the api and revert the changes on failure.
* remove your object without waiting for the api and pop it back on failure.
This will give a real time feel as everything will be updated instantly and only reverted on unmanaged errors. If you trust your backend enough, this is the way to go.
EDIT: Just saw your update. You can stick to this way of mutating the data (or to the normal way of updating the state on success), but to avoid too much complexity and rendering time, make sure you store your data using keyBy. Once your items are stored as an object keyed by their id, you will be able to add, remove and modify them with O(1) complexity. Also, React will understand that it doesn't need to re-render the whole list but only the single updated item. | unknown | |
d9227 | train | You could assign the name to the result using setNames :
result <- purrr::map2(
.x = c(1, 3),
.y = c(10, 20),
function(.x, .y)rnorm(1, .x, .y)
) %>%
setNames(paste0('model', seq_along(.)))
Now you can access each individual objects like :
result$model1
#[1] 6.032297
If you want them as separate objects and not a part of a list you can use list2env.
list2env(result, .GlobalEnv) | unknown | |
d9228 | train | I've just had the same problem. Even though the post is older, it might be interesting to someone else. honk's answer is in principle correct, it's just not immediate to see how it affects the implementation of the algorithm. From the Wikipedia article for Expectation Maximization and a very nice Tutorial, the changes can be derived easily.
If $v_i$ is the weight of the i-th sample, the algorithm from the tutorial (see the end of Section 6.2) changes so that $\gamma_{ij}$ is multiplied by that weighting factor.
For the calculation of the new weights $w_j$, $n_j$ has to be divided by the sum of the weights $\sum_{i=1}^{n} v_i$ instead of just n. That's it...
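Spelled out, with responsibilities $\gamma_{ij}$ and sample weights $v_i$, the weighted M-step reads (a sketch in the usual GMM notation, not quoted from the tutorial): $n_j = \sum_{i=1}^{n} v_i \gamma_{ij}$, mixture weights $w_j = n_j / \sum_{i=1}^{n} v_i$, means $\mu_j = \frac{1}{n_j} \sum_{i=1}^{n} v_i \gamma_{ij} x_i$, and covariances $\Sigma_j = \frac{1}{n_j} \sum_{i=1}^{n} v_i \gamma_{ij} (x_i - \mu_j)(x_i - \mu_j)^T$.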
A: You can calculate a weighted log-likelihood function; just multiply every point's contribution by its weight. Note that you need to use the log-likelihood function for this.
So your problem reduces to minimizing $-\ln L = -\sum_i w_i \ln f(x_i|q)$ (see the Wikipedia article for the original form).
A: Just a suggestion, since no other answers have been posted.
You could use the normal EM with GMM (OpenCV, for example, has wrappers for many languages) and put some points into the cluster twice if you want them to have "more weight". That way the EM would consider those points more important. You can remove the extra points later if it matters.
Otherwise I think this gets into quite extreme mathematics unless you have a strong background in advanced statistics.
A: I was looking for a similar solution related to gaussian kernel estimation (instead of a gaussian mixture) of the distribution.
The standard gaussian_kde does not allow that but I found a python implementation of a modified version here
http://mail.scipy.org/pipermail/scipy-user/2013-May/034580.html
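(Side note: newer SciPy releases support this directly - gaussian_kde accepts a weights argument since SciPy 1.2. A minimal sketch:)
import numpy as np
from scipy.stats import gaussian_kde

data = np.random.normal(size=1000)
weights = np.random.uniform(0.5, 1.5, size=1000)

kde = gaussian_kde(data, weights=weights)  # weighted kernel density estimate
print(kde(0.0))                            # density estimate at x = 0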
A: I think this analysis can possibly be done via pomegranate (see the Pomegranate docs page), which supports weighted Gaussian Mixture Modeling.
According to their doc:
weights : array-like, shape (n_samples,), optional
The initial weights of each sample in the matrix. If nothing is
passed in then each sample is assumed to be the same weight.
Default is None.
Here is a Python snippet I wrote that can possibly help you do a weighted GMM:
from pomegranate import *
import numpy as np
# Generate some data
N = 200
X_vals= np.random.normal(-17, 0.9, N).reshape(-1,1) # Needs to be in Nx1 shape
X_weights = w_function(X_vals) # Needs to be in 1xN shape or alternatively just feed in the weight data you have
pmg_model = GeneralMixtureModel.from_samples([NormalDistribution], 2, X_vals, weights=X_weights.T[0])
[Figure] Observed versus weighted distribution of the data we are analyzing
[Figure] GMM of the weighted data | unknown | |
d9229 | train | Is there any way how to automatize this, so I will have correct hash in R1 description? Something like hash predicting?
The short answer is no.
The longer answer is still no, but you might not need to predict the hash. The issue here is that you are copying some fix commit—let's call this F—from master to another branch. Let's call this -x cherry-picked copy commit Fx. You may also end up copying the fix commit to a new fix commit because you are avoiding using git merge in this work-flow, so if master has acquired new commits, you use rebase to cherry-pick F into a new commit F' that you will add to master, and now you want to replace your Fx—your cherry-picked copy of F—with a cherry-picked copy of F'.
So, you can just do that. If you rebase commit F to make F', strip Fx from the other branch and re-run git cherry-pick -x to copy F' to Fx'. You already know which commits these are, because you have the original hash ID of F and cherry-picking it (via rebase) produces F'; and you have F's hash ID in Fx. The drawback is that this re-copies any commits after Fx on the other branch, since "strip Fx from the other branch" can be nontrivial.
(An alterative approach, one that avoids all this fussing-about, is to merge the fix into both branches. See How to do a partial merge in git? and the linked blog articles.) | unknown | |
d9230 | train | The way I usually get around this is to set the procedure to execute as owner and then make sure that the owner of the procedure has the correct permissions to perform the decryption. A lot of the time the owner of the proc is dbo anyway, so no additional configuration needs to be done apart from altering the procedure like so:
ALTER PROCEDURE proc_name
WITH EXECUTE AS OWNER
AS
OPEN SYMMETRIC KEY SSN_Key_01 DECRYPTION BY CERTIFICATE SSCert01;
SELECT
name,
surname,
CONVERT(nvarchar(50),DECRYPTBYKEY(PasswordEnc)) as DecryptedPassword
FROM
[tbl_Users];
CLOSE SYMMETRIC KEY SSN_Key_01;
This means that you don't have to grant any additional permissions at all to your application role or users. | unknown | |
d9231 | train | If you want to save output to a file, rather than just play it, probably the most practical way of doing this is to generate MIDI files. There are a few packages that can help you create MIDI files in a similar way.
This example uses the package called mido
from mido import Message, MidiFile, MidiTrack
mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)
track.append(Message('note_on', note=64, velocity=64, time=32))
mid.save('new_song.mid')
You can convert MIDI files to MP3s easily if needed (many programs or even online tools can do this), although MIDI files are generally well-supported in most applications.
See also
Although perhaps dated, you can also check the PythonInMusic wiki for references to more MIDI and other audio libraries.
Alternatively, you can use a separate program to record your audio output when running your python script using winsound. | unknown | |
d9232 | train | Here is a different approach
Sub dp()
Dim AR As Long, p1 As Range, n As Long
AR = Cells(Rows.Count, "A").End(xlUp).Row
n = 8
With Range(Cells(8, 1), Cells(AR, 1))
For Each p1 In .Cells
If WorksheetFunction.CountIf(.Cells, p1) > 1 Then
If WorksheetFunction.CountIf(Columns(4), p1) = 0 Then
Cells(n, "D") = p1
n = n + 1
End If
End If
Next p1
End With
End Sub
A: Here are three different techniques:
* ArrayList
* ADODB.Recordset
* Array and CountIf
ArrayList
Sub ListDuplicates()
Dim v, listValues, listDups
Set listValues = CreateObject("System.Collections.ArrayList")
Set listDups = CreateObject("System.Collections.ArrayList")
For Each v In Range("A8", Cells(Rows.Count, "A").End(xlUp)).Value
If listValues.Contains(v) And Not listDups.Contains(v) Then listDups.Add v
listValues.Add v
Next
Range("D8").Resize(listDups.Count).Value = Application.Transpose(listDups.ToArray)
End Sub
ADODB.Recordset
Sub QueryDuplicates()
Dim rs As Object, s As String
Set rs = CreateObject("ADODB.Recordset")
s = ActiveSheet.Name & "$" & Range("A7", Cells(Rows.Count, "A").End(xlUp)).Address(False, False)
rs.Open "SELECT [Pivot Table] FROM [" & s & "] GROUP BY [Pivot Table] HAVING COUNT([Pivot Table]) > 1", _
"Provider=MSDASQL;DSN=Excel Files;DBQ=" & ThisWorkbook.FullName
If Not rs.EOF Then Range("D8").CopyFromRecordset rs
rs.Close
Set rs = Nothing
End Sub
Array and CountIf (similar to SJR answer but using an array to gather the data)
Sub ListDuplicatesArray()
Dim v, vDups
Dim x As Long, y As Long
ReDim vDups(x)
With Range("A8", Cells(Rows.Count, "A").End(xlUp))
For Each v In .Value
If WorksheetFunction.CountIf(.Cells, v) > 1 Then
For y = 0 To UBound(vDups)
If vDups(y) = v Then Exit For
Next
If y = UBound(vDups) + 1 Then
ReDim Preserve vDups(x)
vDups(x) = v
x = x + 1
End If
End If
Next
End With
Range("D8").Resize(UBound(vDups) + 1).Value = Application.Transpose(vDups)
End Sub
A: here's another approach:
Option Explicit
Sub main()
Dim vals As Variant, val As Variant
Dim strng As String
With Range(Cells(8, 1), Cells(Rows.count, 1).End(xlUp))
vals = Application.Transpose(.Value)
strng = "|" & Join(vals, "|") & "|"
With .Offset(, 3)
.Value = Application.Transpose(vals)
.RemoveDuplicates Columns:=1, Header:=xlNo
For Each val In .SpecialCells(xlCellTypeConstants)
strng = Replace(strng, val, "", , 1)
Next val
vals = Split(WorksheetFunction.Trim(Replace(strng, "|", " ")), " ")
With .Resize(UBound(vals) + 1)
.Value = Application.Transpose(vals)
.RemoveDuplicates Columns:=1, Header:=xlNo
End With
End With
End With
End Sub
A: another one approach here
Sub dp2()
Dim n&, c As Range, rng As Range, Dic As Object
Set Dic = CreateObject("Scripting.Dictionary")
Dic.comparemode = vbTextCompare
Set rng = Range("A8:A" & Cells(Rows.Count, "A").End(xlUp).Row)
n = 8
For Each c In rng
If Dic.exists(c.Value2) And Dic(c.Value2) = 0 Then
Dic(c.Value2) = 1
Cells(n, "D").Value2 = c.Value2
n = n + 1
ElseIf Not Dic.exists(c.Value2) Then
Dic.Add c.Value2, 0
End If
Next c
End Sub
but if you prefer your own variant, then you need to:
1) replace this line of code: Columns(4).RemoveDuplicates Columns:=Array(1)
by this one:
Range("D8:D" & Cells(Rows.Count, "D").End(xlUp).Row).RemoveDuplicates Columns:=1
2) another problem is in this line of code:
lastrow = Range("D:D").End(xlDown).Row
it will return row #8 instead of the last row that you expected, so you need to replace it with this one: lastrow = Cells(Rows.Count, "D").End(xlUp).Row
3) also, replace "to 1 step -1" with "to 8 step -1"
so, finally your code can look like this:
Sub dp()
Dim AR As Long, p1 As Range, p2 As Range, lastrow&, i&
AR = Cells(Rows.Count, "A").End(xlUp).Row
For Each p1 In Range(Cells(8, 1), Cells(AR, 1))
For Each p2 In Range(Cells(8, 1), Cells(AR, 1))
If p1 = p2 And Not p1.Row = p2.Row Then
Cells(p1.Row, 4) = Cells(p1.Row, 1)
Cells(p2.Row, 4) = Cells(p2.Row, 1)
End If
Next p2, p1
Range("D8:D" & Cells(Rows.Count, "D").End(xlUp).Row).RemoveDuplicates Columns:=1
lastrow = Cells(Rows.Count, "D").End(xlUp).Row
For i = lastrow To 8 Step -1
If IsEmpty(Cells(i, "D").Value2) Then
Cells(i, "D").Delete shift:=xlShiftUp
End If
Next i
End Sub | unknown | |
d9233 | train | Change the :before pseudo element so that it displays as an inline-block:
&:before {
content: '\1F847';
color: $green;
padding-right: 8px;
text-decoration: none;
display: inline-block;
}
A: You can apply text-decoration: none to the a, but insert a span into it with the link text inside the span - then put text-decoration: underline of the a span. Notethat I had to rejig your css a little since we don't have a preprocessor.
a {
font-size: 14px;
text-decoration: none;
color: grey;
font-weight: bold;
text-decoration: none;
}
a span {
text-decoration: underline;
}
a:hover {
cursor: pointer;
}
a:before {
content: '\1F847';
color: green;
padding-right: 8px;
text-decoration: none;
}
<a href="#"><span>2015 Highlights</span></a> | unknown | |
d9234 | train | You can use the python-pcapng package. First install it with the following command:
pip install python-pcapng
Then use the following sample code:
from pcapng import FileScanner
with open(r'C:\Users\zahangir\Downloads\MDS19 Wireshark Log 08072021.pcapng', 'rb') as fp:
scanner = FileScanner(fp)
for block in scanner:
print(block)
print(block._raw) #byte type raw data
The above code worked for me.
Reference: https://pypi.org/project/python-pcapng/ | unknown | |
d9235 | train | I had the same issue once... I noticed that I had declared the variable colorPrimary in the app-level build.gradle file and also in colors.xml. I fixed the error by removing the resource value from the build.gradle file.
I had this in my app-level build.gradle:
defaultConfig {
...
resValue 'color', "colorPrimary", "#2196F3"
}
And this in my colors.xml
<color name="colorPrimary">#C41426</color>
I removed one of them and the problem was solved. | unknown | |
d9236 | train | You created an icon factory... factory?
;-)
The serious answer to your question is that you don't need Gtk::IconFactory. Unfortunately the GTK 2 documentation doesn't tell you that it's unnecessary. What you do need is the freedesktop.org Standard Icon Naming Specification. Create your icons, give them simple names, organize according to the directory structure described there, install it to the appropriate place, and your icons will "just work" when you create a Pixbuf or Image using the ...from_icon_name() functions. (example: Gtk::Image::set_from_icon_name())
Here is a page from the Gnome developer wiki on how to provide your own icons:
http://developer.gnome.org/integration-guide/stable/icons.html.en
And here is a page from a tutorial I wrote about installing custom icons: http://ptomato.name/advanced-gtk-techniques/html/desktop-file.html | unknown | |
d9237 | train | I'm not sure why this is happening, but the issue is that imageData size is not equal to width*height
This code should fix it (though it might not be what you're looking for it to do)
public static Bitmap ConvertBitMap(int width, int height, byte[] imageData)
{
var data = new byte[imageData.Length * 4];
int o = 0;
for (var i = 0; i < imageData.Length ; i++)
{
var value = imageData[i];
data[o++] = value;
data[o++] = value;
data[o++] = value;
data[o++] = 0;
}
...
...
..
..
}
A: The problem here is that the length of imageData is less than height * width. Hence you eventually get an exception on this line because i is greater than imageData.Length
var value = imageData[i];
Consider the sizes that you posted in the question
* data : 614400
* imageData : 105212
The size of data was calculated as height * width * 4, hence we can calculate height * width by dividing by 4, which gives height * width == 153600. This is clearly larger than 105212, and hence you end up accessing outside the bounds of the array. | unknown | |
d9238 | train | I would suggest not starting the docker daemon by hand, not least because the flags and options change between versions.
My running 1.9 docker daemon has flags:
/usr/bin/docker daemon -H fd://
I would suggest what has happened here, is that the docker invocation has changed - it's no longer docker -d - it changed between my two installs (1.7.1 and 1.9.1).
If you look at the docker daemon manpage you'll see the -d flag is gone.
But actually, I'd suggest not running it by hand at all, and look at altering the service invocation. On my Centos6 box, that's /etc/sysconfig/docker (You can see where this is by reading /etc/init.d/docker which is what service invokes) | unknown | |
d9239 | train | In this particular case, a_number names an int object that consumes sizeof(int) bytes and has automatic storage duration. Memory for storage with automatic duration is typically allocated in the stack frame of the function to which the declaration belongs (main() in this case).
a_number effectively becomes a name for the int object stored in these bytes. The name does not exist at runtime, because it is no longer needed at that time. The only purpose of the name is to allow you to refer to the object in code.
A: Variables can stored in memory areas or in processor registers, depending on the compiler and optimization settings.
Let's assume that your compiler is using a stack for function local variables and parameters. Your a_number variable would be placed on the stack since its lifetime is temporary (it will disappear after execution leaves the function).
The compiler is allowed to place the a_number into a processor register. In this case, the variable doesn't exist in memory because processor registers are not in memory (they don't have addresses).
Since your program doesn't use the a_number variable after its declaration, the compiler can eliminate it entirely and use no memory at all. There is no difference in the behavior of your program with or without the variable, so the compiler is free to remove it.
The location of your variable depends on your compiler. Your compiler can store variables "on the stack", in a processor register, or eliminate the variable entirely. The location also depends on the "optimization setting" of your compiler. Some compilers may not optimize at the lowest settings and may remove the variable at higher settings.
A:
but where is a_number itself?
Just where you see it, in the source code file. The compiler sees it there and keeps track of it, generating what code it needs to. If you have debugging turned on, then the symbol is stored along with the code in a special look up table so you can see it in the debugger as well. | unknown | |
d9240 | train | You can use this code to create navbar
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<a class="navbar-brand" href="#">Navbar</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav mr-auto">
<li class="nav-item active">
<a class="nav-link" href="#">Home <span class="sr-only">(current)</span></a>
</li>
<li class="nav-item">
<a class="nav-link" href="#">Link</a>
</li>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
Dropdown
</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
<a class="dropdown-item" href="#">Action</a>
<a class="dropdown-item" href="#">Another action</a>
<div class="dropdown-divider"></div>
<a class="dropdown-item" href="#">Something else here</a>
</div>
</li>
<li class="nav-item">
<a class="nav-link disabled" href="#">Disabled</a>
</li>
</ul>
<form class="form-inline my-2 my-lg-0">
<input class="form-control mr-sm-2" type="search" placeholder="Search" aria-label="Search">
<button class="btn btn-outline-success my-2 my-sm-0" type="submit">Search</button>
</form>
</div>
</nav> | unknown | |
d9241 | train | const winUtils = require("sdk/deprecated/window-utils");
var searchbar = winUtils.activeBrowserWindow.document.getElementById("searchbar"); | unknown | |
d9242 | train | If it's a Spring project, there are two locations for properties:
src/main/resources
src/test/resources
If you run tests it will pick from src/test/resources.
A: @RunWith(SpringRunner.class)
@DataJpaTest
public class AccessPropertiesTest {
@Value("${my.spring.greeting}")
String greeting;
.....
}
Refer to https://www.baeldung.com/spring-boot-testing
Add my.spring.greeting=anyValue into the application.properties (or application.yml) file. | unknown | |
d9243 | train | In CSS only, write this:
.carousel-inner > .item > img {
height:500px;
}
If I use the provided class it also affects the width; when I used min-height with your class it works, but only on laptop and small screen sizes, and the image stretches - it's not working on large screens.
A: Please use this:
<script>
$( document ).ready(function() {
function mySlider() {
var winH = $(window).height();
var winW = $(window).width();
var fourBoxH = $('#fourBox').height();
var finalSliderH = winH - fourBoxH;
$('.carousel-inner').css( {'height' : finalSliderH+'px', 'width' : winW+'px', });
};
window.onload = mySlider();
});
</script>
css >>
.carousel-inner { }
.carousel-inner img { width:100% }
on 4 box container add id >>
<div class="container-fluid" id="fourBox"> | unknown | |
d9244 | train | You should use onItemClickListener or OnClickListener
recyclerView.setOnClickListener(new View.OnClickListener() {
...
}
A: In the onBindViewHolder function add onClickListner on items.
@Override
public void onBindViewHolder(DataAdapter.ViewHolder viewHolder, int i) {
viewHolder.tv_country.setText(countries.get(i));
viewHolder.tv_country.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
}
});
}
In this example I added an OnClickListener on tv_country. If you need further help, ask me in the comments. | unknown | |
d9245 | train | So, what @jared gotte said - "adaptive" implies a web page that can adapt to the device capabilities without having to serve up different content from the server. So in that regard your question is a bit nonsensical.
But, that said, the way most [large] sites handle serving different content to mobile vs. desktop is by setting up different subdomains. For example, Facebook uses www.facebook.com for the desktop version of the site and m.facebook.com for the mobile version. When a user first hits the site, the server looks at the User-Agent header to decide what type of device they're using and redirects them appropriately. If/when you want to switch them between the two on the client, you can use JS to redirect their browser.
The caveat to this is that you'll need to set up the DNS hostname(s), and make your server code aware of the Host header on incoming requests.
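For illustration only, a minimal server-side sketch of that User-Agent check (Python/Flask purely as an example; the hostnames and hint list are placeholders):
from flask import Flask, redirect, request

app = Flask(__name__)
MOBILE_HINTS = ("iphone", "ipod", "android", "mobile")

@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "").lower()
    if any(hint in ua for hint in MOBILE_HINTS):
        # mobile browser detected: send it to the mobile subdomain
        return redirect("http://m.example.com/", code=302)
    return "desktop version of the site"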
A: Put your desktop CSS in one CSS file. Put your adaptive stuff in another CSS file. Serve both to the user. But then if a user says "show me the desktop version", stop serving them the adaptive CSS file. | unknown | |
d9246 | train | In the last few years, Eric Evans has recognized an update to his DDD pattern: Domain Events (aka External Events concept).
Internal events in Event Sourcing patterns is what we've been focusing on, such as UserCreatedEvent in your example. Keep these explicit with an IEvent marker interface.
While IEvents are stull published on the bus, IDomainEvents are more notability for larger external-to-the-domain notifications that don't effect a state of an aggregate per say.
So...
CreateUser (ICommand)
^- CreateUserCommandHandler
UserCreated (IEvent)
^- UserCreatedEventHandler
SendNewUserEmail (ICommand)
^- SendNewUserEmailCommandHandler
NewUserEmailSent (IDomainEvent)
^- UserRegistrationService or some other AC
I am still pretty new to event sourcing myself; but, I would guess that you can have the UserRegistrationService register on the bus to listen for the SendNewUserEmail ICommand.
Either way you go, I would concentrate on creating additional commands/events for sending an email and the email was sent. Then, later on you can view the transaction log as to when it was queued to send, how long it took to send, was there any retries in sending, how many was sent at the same time and did it effect time delays (datetime diffs) to show any bottlenecks?, install a queue for sending emails and break it out into a smaller independent service, etc etc. | unknown | |
d9247 | train | Looks like it's a simple REST service call: https://graph.facebook.com/10150146071831729
Take a look at the "Example" at the bottom of this page: https://developers.facebook.com/docs/reference/api/photo/ | unknown | |
d9248 | train | I don't see a MySQL driver dependency in your pom. It could be running fine on the server because the server may already have the driver jar; in that case, adding the MySQL driver dependency in provided scope should solve the issue. | unknown | |
d9249 | train | I think this should be more obvious and should work without any tweaking. But still, it's pretty easy.
The solution has two parts:
* Create DataRelation between child and parent tables and set it to cascade on updates.
That way whenever parent Id changes, all children will be updated.
Dim rel = ds.Relations.Add(parentTab.Columns("Id"), childTab.Columns("ParentId"))
rel.ChildKeyConstraint.UpdateRule = Rule.Cascade
*Dataset insert and update commands are two-way:
If there are any output parameters bound or any data rows returned,
they will be used to update dataset row that caused the update.
This is most useful for this particular problem: getting autogenerated columns
back to application.
Apart from identity this might be for example a timestamp column.
But identity is most useful.
All we need to do is set insert command to return identity.
There are several ways to do it, for example:
a) Using stored procedure with output parameter. This is the most portable way among "real" databases.
b) Using multiple SQL statements, with last one returning inserted row. This is AFAIK specific to SQL Server, but the simplest:
insert into Parent (Col1, Col2, ...) values (@Col1, @Col2, ...);
select * from Parent where Id = SCOPE_IDENTITY();
After setting this up, all you need to do is create parent rows with Ids that are unique (within single dataset) but impossible in the database. Negative numbers are usually a good choice. Then, when you save dataset changes to database, all new parent rows will get real Ids from database.
Note: If you happen to work with a database without multiple-statement support and without stored procedures (e.g. Access), you will need to set up an event handler for the RowUpdated event on the parent table adapter. In the handler you need to get the identity with a select @@IDENTITY command.
Some links:
*
*MSDN: Retrieving Identity or Autonumber Values (ADO.NET)
*MSDN: Managing an @@IDENTITY Crisis
*Retrieving Identity or Autonumber Values into Datasets
*C# Learnings: Updating identity columns in a Dataset
A: Couple of things to point out.
* Yes, you definitely need relations assigned for both tables. You can check from the xsd editor (double click your xsd file). By default the relation is set as 'relation only', which doesn't have any 'update rule'. Edit this relation by going into 'edit relation' and selecting 'Foreign Key Constraint Only' or the 'Both~~~' one. And you need to set 'Update Rule' to Cascade! 'Delete Rule' is up to you.
* Now when you use a new parent table row's ID (AutoIncrement) for new child table rows as a foreign key, you have to add the parent row into the table first before you use the new parent row's ID around.
* As soon as you call Update for the parent table using its tableadapter, the associated child table's new rows will have the correct parentID AUTOMATICALLY.
My simple code snippets:
'--- Make Parent Row
Dim drOrder as TOrderRow = myDS.TOder.NewTOrderRow
drOrder.SomeValue = "SomeValue"
myDS.TOrder.AddTOrderRow(drOrder) '===> THIS SHOULD BE DONE BEFORE CHILD ROWS
'--- Now Add Child Rows!!! there are multiple ways to add a row into tables....
myDS.TOrderDetail.AddTOrderDetailRow(drOrder, "detailValue1")
myDS.TOrderDetail.AddTOrderDetailRow(drOrder, "detailvalue2")
'....
'....
'--- Update Parent table first
myTableAdapterTOrder.Update(myDS.TOrder)
'--- As soon as you run this Update above for parent, the new parent row's AutoID(-1)
'--- will become real number given by SQL server. And also new rows in child table will
'--- have the updated parentID
'--- Now update child table
myTableAdapterTOrderDetail.Update(myDS.TOrderDetail)
I hope it helps!
A: And if you don't want to use datasets and yet want the ids assigned to the children so that you can update your model:
https://danielwertheim.wordpress.com/2010/10/24/c-batch-identity-inserts/ | unknown | |
d9250 | train | Apologies in advance if I misunderstood your question, but it sounds like you'd like to use JavaScript from another location on your site.
Using the example above, here's what that would look like:
<html>
<head>
<title>Title of the document</title>
</head>
<body>
The content of the document......</p>
The <a href="http://sitelink.com">link</a> of the document ......
<script type="text/javascript" src="http://mysite.com/java.js"></script>
</body>
</html>
You could also link to it in the <head> instead, but it's better for performance if the scripts are placed in the footer.
A: your anchor:
href="javascript:linksomething()"
and js:
function linksomething(){
window.location.href=url;
}
Is this what you want? | unknown | |
d9251 | train | Not exactly sure how to answer what's in your question's title, aside from running some type of update operation to update all of the ttl properties.
As far as enabling TTL itself: TTL is enabled in the collection settings:
You'll need to choose a default ttl for documents without a ttl property (which can be -1 for a default of "do not expire").
A: You are out of luck. The ttl field is hard-coded. You'll need to migrate your existing ttl field to a new field name, maybe old_ttl and enable DocumentDB's ttl functionality after that migration is done. No other choice. | unknown | |
d9252 | train | You have to calculate the densities first, then assign the values to the points so you can map that aesthetic:
library(ggplot2)
library(ggthemes)
library(scales)
library(ggmap)
library(MASS)
library(sp)
library(viridis)
pop <- read.csv("~/Dropbox/PopulationDensity.csv", header=TRUE, stringsAsFactors=FALSE)
# get density polygons
dens <- contourLines(
kde2d(pop$LONG, pop$LAT,
lims=c(expand_range(range(pop$LONG), add=0.5),
expand_range(range(pop$LAT), add=0.5))))
# this will be the color aesthetic mapping
pop$Density <- 0
# density levels go from lowest to highest, so iterate over the
# list of polygons (which are in level order), figure out which
# points are in that polygon and assign the level to them
for (i in 1:length(dens)) {
tmp <- point.in.polygon(pop$LONG, pop$LAT, dens[[i]]$x, dens[[i]]$y)
pop$Density[which(tmp==1)] <- dens[[i]]$level
}
Canada <- get_map(location="Canada", zoom=3, maptype="terrain")
gg <- ggmap(Canada, extent="normal")
gg <- gg + geom_point(data=pop, aes(x=LONG, y=LAT, color=Density))
gg <- gg + scale_color_viridis()
gg <- gg + theme_map()
gg <- gg + theme(legend.position="none")
gg | unknown | |
d9253 | train | Just curious why you are using the same variable name for your file path, then for your file handle, and then again in your next with block.
_io.TextIOWrapper is the object from your previous open, which has been assigned to the setFile variable.
try:
with open(setFile, 'r') as readFile:
olddata = readFile.readlines()
newdata = ''
for line in olddata:
newdata += re.sub(regex, newset, line)
with open(setFile, 'w') as writeFile:
writeFile.write(newdata) | unknown | |
d9254 | train | You need to define the fields you need to import in the schema.xml.
The DIH does not autogenerate the fields, and it is better to create the fields explicitly if the number of fields is small.
Solr also allows you to define dynamic fields, where the fields need not be explicitly defined but just need to match the name pattern.
<dynamicField name="*_i" type="integer" indexed="true" stored="true"/>
You can also define a catch-all field with Solr; however, the behaviour cannot be controlled, as the same analysis would be applied to all the fields. | unknown | |
d9255 | train | As a workaround, instead of using an Assert.IsTrue like that, you could try something like:
var numbers = GetListOfNumbers();
var fails = numbers.Where(currentNum => !TestNumber(currentNum)).ToList();
if (fails.Count > 0)
Assert.Fail(/*Do whatever with list of fails*/);
A: NUnit 2.5 has data-driven testing; this will do exactly what you need. It'll iterate over all of your data and generate individual test cases for each number.
Link
A: This can be done in MBUnit using a "RowTest" test method. I'm not aware of a way of doing this in NUnit, however. | unknown | |
d9256 | train | You need to make your googlePlace function actually return a promise:
function googlePlace(airport) {
// notice the new Promise here
return new Promise((resolve, reject) => {
https
.get(
"https://maps.googleapis.com/maps/api/place/findplacefromtext/json?input=" +
airport +
"&inputtype=textquery&fields=geometry",
(resp) => {
let data = "";
// A chunk of data has been recieved.
resp.on("data", (chunk) => {
data += chunk;
});
//All data is returned
resp.on("end", () => {
data = JSON.parse(data);
let obj = {
name: airport,
location: data.candidates[0].geometry.location,
};
console.log("latlong should print after this");
// notice the resolve here
resolve(obj);
});
}
)
.on("error", (err) => {
console.log("Error: " + err.message);
reject(err.message);
});
});
} | unknown | |
d9257 | train | It seems the Input library is damaged. Replace the following file with an original one:
system/core/Input.php
Or try a fresh install. | unknown | |
d9258 | train | A) and ' which will show nothing, and other formula or possibly entry that will affect the actual value. Thanks in advance.
Private Sub testInputBox_Click()
Dim x As String
Dim y As String
Dim yDefault As String
Dim found As Range
x = InputBox("Enter Parts No.", "Edit Description")
If (x <> "") Then
If (WorksheetFunction.CountIf(Worksheets("Sheet1").Range("E2:E27"), x) > 0) Then
Set found = Worksheets("Sheet1").Range("E2:E27").Find(x, LookIn:=xlValues)
yDefault = found.Offset(0, 1).Text
y = InputBox("Amend Description", "Edit Description", yDefault)
Else
MsgBox ("Not found!")
End If
If (y <> "") Then 'Filter should be done here
If MsgBox("Proceed to edit?", vbYesNo, "Confirmation") = vbNo Then
Else
found.Offset(0, 1).Value = CStr(y)
End If
End If
End If
End Sub
A: You could use different approaches to filter some or all required values. Keep in mind that your y variable is of string type. Therefore, here are some ideas with comments:
'tests for starting characters
If y <> "'" And y <> "=" Then
'test for formulas
If UCase(Left(y, 4)) <> "=SUM" Then
'test for any string within other string
If InStr(1, y, "sum", vbTextCompare) = 0 Then
'...here your code
End If
End If
End If
You could combine them all into one If...Then statement using And or Or operators. | unknown | |
d9259 | train | This is a plugin for eclipse which costs money. http://www.dvteclipse.com/ I've never tried it.
Most people at my work use VIM or emacs to edit e-files. I use JEdit.
Here's a crash-course on Specman. | unknown | |
d9260 | train | Please first make sure whether your JVM is working or not. Try to start the JVM from the command prompt. If you are able to launch the java.exe file, then the problem lies somewhere in your project.
You are using NetBeans, so before starting NetBeans remove the NetBeans cache. There are chances that your NetBeans is pointing to an old class path for JProfiler.
Regards,
Gunjan. | unknown | |
d9261 | train | This approach ends up being slower than the pivot but it's got a different trick, so I'll include it.
df2=pl.from_pandas(df)
df2_ans = (
    df2.with_row_count('userId')
    .with_column(pl.col('segments').str.split(','))
    .explode('segments')
    .with_columns([
        pl.when(pl.col('segments') == pl.lit(str(i)))
          .then(pl.lit(1, pl.Int32))
          .otherwise(pl.lit(0, pl.Int32))
          .alias(str(i))
        for i in range(1000)
    ])
    .groupby('userId')
    .agg(pl.exclude('segments').sum())
)
df_one_hot_encoded = df2_ans.to_pandas()
A couple of other observations. I'm not sure if you checked the output of your str.contains method but I would think that wouldn't work because, for example, 15 is contained within 154 when looking at strings.
The other thing, which I guess is just a preference, is the with_row_count syntax vs pl.arange. I don't think the performance of either is better (at least not significantly so), but you don't have to reference the df name to get its length, which is nice.
I tried a couple other things that were also worse including not doing the explode and just doing is_in but that was slower. I tried using bools instead of 1s and 0s and then aggregating with any but that was slower. | unknown | |
d9262 | train | There're several ways to do that.
*
*1) Add y Axis grid text
*2) Simply, add some normal DOM element(ex. div, etc.) and handle it to be positioned over chart element
I'll put an example using the first one; also check the ygrids() API doc for details.
// variable to hold min and max value
var min;
var max;
var chart = bb.generate({
data: {
columns: [
["sample", 30, 200, 100, 400, 150, 250]
],
onmin: function(data) {
min = data[0].value;
},
onmax: function(data) {
max = data[0].value;
}
}
});
// add y grids with determined css classes to style texts added
chart.ygrids([
{value: min, text: "Value is smaller than Y max", position: "start", class: "min"},
{value: max, text: "Value is greater than Y max", position: "start", class: "max"}
]);
/* to hide grid line */
.min line, .max line{
display:none;
}
/* text styling */
.min text, .max text{
font-size:20px;
transform: translateX(10px);
}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/billboard.js/dist/billboard.min.css" />
<script src="https://cdn.jsdelivr.net/npm/billboard.js/dist/billboard.pkgd.min.js"></script>
<title>billboard.js</title>
</head>
<body>
<div id="chart"></div>
</body>
</html> | unknown | |
d9263 | train | It looks like vue-rellax was never rewritten for Vue 3. You're likely better off to use the rellax library and import it into your components or as a window variable.
App.vue:
<script setup>
import { onMounted } from 'vue';
import Rellax from 'rellax'
onMounted(() => {
let rellax = new Rellax('.rellax');
})
</script> | unknown | |
d9264 | train | I think it is an unexpected problem; try a new device to fix it.
But you can set the scale type to 'AUTO' in the emulator profile settings.
You can also drag the border of the emulator to scale it. | unknown |
d9265 | train | Add the font to your project, change its Build Action to Content. Then just reference it inline or as part of a Style or BasedOn value like;
<TextBlock FontFamily="/Fonts/New12.ttf#New12" Text="Check out my awesome font!" />
That should do it for you.
A: I found the solution,
For the FontFamily value you can write
"ms-appdata:///local/MyFont.ttf#FontName"
(where local is ApplicationData::Current->LocalFolder) | unknown | |
d9266 | train | For some simple apps, it is possible to design your iPhone UI and reuse the same xib file for the iPad. Just select your Target in XCode and copy the Main Interface text from iPhone / iPod Deployment Info to iPad Deployment Info. If you're using a Main Storyboard, copy that too. However, the iPad does not simply scale everything up from the 320*480 / 640*960 iPhone screen to the 768*1024 / 1536*2048 iPad screen. @elgarva correctly says that this would look terrible. Instead, the iPad version makes use of your autosizing masks to resize or reposition each view.
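For views laid out in code, those are the same masks you would set yourself; a minimal Objective-C sketch (contentView and toolbar are placeholder view names, and the flags are the standard UIViewAutoresizing constants):
// Let a content view stretch with its superview when the screen is larger
contentView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
// Keep a toolbar pinned to the bottom by making only its top margin flexible
toolbar.autoresizingMask = UIViewAutoresizingFlexibleTopMargin | UIViewAutoresizingFlexibleWidth;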
If all of your views can be considered to be left-middle-right or top-middle-bottom, this may work. If you have anything more complicated, you'll need to design a separate iPad interface.
Duplicating your iPhone UI is not just discouraged for aesthetic reasons - iPhones often end up containing a deep and confusing navigation tree for tasks that the iPad can fit on a single screen.
A: The main reason is that if you just scale the elements on the UI to fit the larger screen, it wouldn't look nice... and you don't need to do anything for it to work; it automatically does it for you if your app is iPhone-only and installed on an iPad (if the user chooses to).
Having a different XIB lets you rearrange your app, and think it so that you can take advantage of the larger screen. You can probably show more information on one iPad view than on 3 different screens on the iPhone... so, your iPhone app could show basic info and expand it when the user taps on it, while your iPad version could show all the information on load, plus extra graphics that look nice but aren't needed, and wouldn't make sense on the iPhone screen.
PS: If you're starting a new app, I strongly suggest using the storyboard if your app won't have a lot of views... it's really easy to get started and it lets you see your app flow at a glance.
A: The retina display just doubles the resolution of the original iPhone. If you don't provide separate graphics for the retina display, then the system just doubles the resolution of your resources.
The points are related to the physical size of the screen, which is similar in old and new iPhones.
For iPads, the screen size changes. This means that their dimensions in points will be different from those of the iPhone.
A: Duplicating the xib file and renaming it as filename~ipad.xib is working great for me in iOS 6.1. | unknown |
d9267 | train | Since you don't show the code that gets the date of your object, this question is impossible to answer without some knowledge of the Outlook object you are trying to access.
If you have an array of objects you can sort them by date and filter ones prior to a certain one.
my $sub = sub {
my $ad = $a->date_string_accessor;
my $bd = $b->date_string_accessor;
    # rewrite MM/DD/YYYY as YYYYMMDD (zero-padded) so a plain string compare sorts chronologically
    $ad =~ s:(\d+)/(\d+)/(\d+):$3 . sprintf('%02d', $1) . sprintf('%02d', $2):e;
    $bd =~ s:(\d+)/(\d+)/(\d+):$3 . sprintf('%02d', $1) . sprintf('%02d', $2):e;
return $ad cmp $bd;
};
my @sorted = sort $sub @unsorted;
print join("\n", @sorted);
But it would seem to me that you should use the application itself to do this -- presumably Outlook has some sort of query/sort functionality. | unknown | |
d9268 | train | Those aren't trash characters, they're the Unicode Replacement Character returned when bytes are decoded into text using the wrong character set.
The very fact you got readable text means decrypting succeeded. It's decoding the bytes into text that failed.
The bug is in the Java code. It's using the String(byte[]) which, according to the docs:
Constructs a new String by decoding the specified array of bytes using the platform's default charset.
That's obviously not UTF8. The String(byte[] bytes,Charset charset) or String(byte[] bytes,String charsetName) constructors should be used instead, passing the correct character set, eg :
byte[] decryptedBytes = cipher.doFinal(....);
return new String(decryptedBytes, StandardCharsets.UTF_8);
The hacky alternative is to change the remote server's default character set to UTF8. | unknown | |
d9269 | train | You can try this:
With this you get, for every vertex, the outgoing edges:
select $a.@rid, $a.outE() from 'your class'
let $a = (select from 'your class' where $parent.current.@rid = @rid)
If you want the incoming edges instead, you have to replace $a.outE() with $a.inE(), like below:
select $a.@rid, $a.inE() from 'your class'
let $a = (select from 'your class' where $parent.current.@rid = @rid)
Hope it helps.
Regards. | unknown | |
d9270 | train | My suggestion would be to generate a list of n-grams from each phrase and calculate the edit distance between each n-gram and the key phrase.
Example:
key phrase: "What is your name"
phrase 1: "hi, my name is john doe. I live in new york. What is your name?"
phrase 2: "My name is Bruce. wht's your name"
A possible matching n-gram would be between 3 and 4 words long, therefore we create all 3-grams and 4-grams for each phrase, we should also normalize the string by removing punctuation and lowercasing everything.
phrase 1 3-grams:
"hi my name", "my name is", "name is john", "is john doe", "john doe I", "doe I live"... "what is your", "is your name"
phrase 1 4-grams:
"hi my name is", "my name is john doe", "name is john doe I", "is john doe I live"... "what is your name"
phrase 2 3-grams:
"my name is", "name is bruce", "is bruce wht's", "bruce wht's your", "wht's your name"
phrase 2 4-grmas:
"my name is bruce", "name is bruce wht's", "is bruce wht's your", "bruce wht's your name"
Next you can do levenstein distance on each n-gram this should solve the use case you presented above. if you need to further normalize each word you can use phonetic encoders such as Double Metaphone or NYSIIS, however, I did a test with all the "common" phonetic encoders and in your case it didn't show significant improvement, phonetic encoders are more suitable for names.
I have limited experience with PHP but here is a code example:
<?php
function extract_ngrams($phrase, $min_words, $max_words) {
echo "Calculating N-Grams for phrase: $phrase\n";
$ngrams = array();
$words = str_word_count(strtolower($phrase), 1);
$word_count = count($words);
for ($i = 0; $i <= $word_count - $min_words; $i++) {
for ($j = $min_words; $j <= $max_words && ($j + $i) <= $word_count; $j++) {
$ngrams[] = implode(' ',array_slice($words, $i, $j));
}
}
return array_unique($ngrams);
}
function contains_key_phrase($ngrams, $key) {
foreach ($ngrams as $ngram) {
if (levenshtein($key, $ngram) < 5) {
echo "found match: $ngram\n";
return true;
}
}
return false;
}
$key_phrase = "what is your name";
$phrases = array(
"hi, my name is john doe. I live in new york. What is your name?",
"My name is Bruce. wht's your name"
);
$min_words = 3;
$max_words = 4;
foreach ($phrases as $phrase) {
$ngrams = extract_ngrams($phrase, $min_words, $max_words);
if (contains_key_phrase($ngrams,$key_phrase)) {
echo "Phrase [$phrase] contains the key phrase [$key_phrase]\n";
}
}
?>
And the output is something like this:
Calculating N-Grams for phrase: hi, my name is john doe. I live in new york. What is your name?
found match: what is your name
Phrase [hi, my name is john doe. I live in new york. What is your name?] contains the key phrase [what is your name]
Calculating N-Grams for phrase: My name is Bruce. wht's your name
found match: wht's your name
Phrase [My name is Bruce. wht's your name] contains the key phrase [what is your name]
EDIT: I noticed some suggestions to add phonetic encoding to each word in the generated n-gram. I'm not sure phonetic encoding is the best answer to this problem as they are mostly tuned to stemming names (american, german or french depending on the algorithm) and are not very good at stemming plain words.
I actually wrote a test to validate this in Java (as the encoders are more readily available) here is the output:
===========================
Created new phonetic matcher
Engine: Caverphone2
Key Phrase: what is your name
Encoded Key Phrase: WT11111111 AS11111111 YA11111111 NM11111111
Found match: [What is your name?] Encoded: WT11111111 AS11111111 YA11111111 NM11111111
Phrase: [hi, my name is john doe. I live in new york. What is your name?] MATCH: true
Phrase: [My name is Bruce. wht's your name] MATCH: false
===========================
Created new phonetic matcher
Engine: DoubleMetaphone
Key Phrase: what is your name
Encoded Key Phrase: AT AS AR NM
Found match: [What is your] Encoded: AT AS AR
Phrase: [hi, my name is john doe. I live in new york. What is your name?] MATCH: true
Found match: [wht's your name] Encoded: ATS AR NM
Phrase: [My name is Bruce. wht's your name] MATCH: true
===========================
Created new phonetic matcher
Engine: Nysiis
Key Phrase: what is your name
Encoded Key Phrase: WAT I YAR NAN
Found match: [What is your name?] Encoded: WAT I YAR NAN
Phrase: [hi, my name is john doe. I live in new york. What is your name?] MATCH: true
Found match: [wht's your name] Encoded: WT YAR NAN
Phrase: [My name is Bruce. wht's your name] MATCH: true
===========================
Created new phonetic matcher
Engine: Soundex
Key Phrase: what is your name
Encoded Key Phrase: W300 I200 Y600 N500
Found match: [What is your name?] Encoded: W300 I200 Y600 N500
Phrase: [hi, my name is john doe. I live in new york. What is your name?] MATCH: true
Phrase: [My name is Bruce. wht's your name] MATCH: false
===========================
Created new phonetic matcher
Engine: RefinedSoundex
Key Phrase: what is your name
Encoded Key Phrase: W06 I03 Y09 N8080
Found match: [What is your name?] Encoded: W06 I03 Y09 N8080
Phrase: [hi, my name is john doe. I live in new york. What is your name?] MATCH: true
Found match: [wht's your name] Encoded: W063 Y09 N8080
Phrase: [My name is Bruce. wht's your name] MATCH: true
I used a levenshtein distance of 4 when running these tests, but I am pretty sure you can find multiple edge cases where using the phonetic encoder will fail to match correctly. by looking at the example you can see that because of the stemming done by the encoders you are actually more likely to have false positives when using them in this way. keep in mind that these algorithms are originally intended to find those people in the population census that have the same name and not really which english words 'sound' the same.
A: What you are trying to achieve is a quite complex natural language processing task and it usually requires parsing among other things.
What I am going to suggest is to create a sentence tokenizer that will split the phrase into sentences. Then tokenize each sentence splitting on whitespace, punctuation and probably also rewriting some abbreviations to a more normal form.
Then, you can create custom logic that traverses the token list of each sentence looking for specific meaning. Ex.: ['...','what','...','...','your','name','...','...','?'] can also mean what is your name. The sentence could be "So, what is your name really?" or "What could your name be?"
I am adding code as an example. I am not saying you should use something that simple. The code below uses NlpTools a natural language processing library in php (I am involved in the library so feel free to assume I am biased).
<?php
include('vendor/autoload.php');
use \NlpTools\Tokenizers\ClassifierBasedTokenizer;
use \NlpTools\Classifiers\Classifier;
use \NlpTools\Tokenizers\WhitespaceTokenizer;
use \NlpTools\Tokenizers\WhitespaceAndPunctuationTokenizer;
use \NlpTools\Documents\Document;
class EndOfSentence implements Classifier
{
public function classify(array $classes, Document $d)
{
list($token, $before, $after) = $d->getDocumentData();
$lastchar = substr($token, -1);
$dotcnt = count(explode('.',$token))-1;
if (count($after)==0)
return 'EOW';
// for some abbreviations
if ($dotcnt>1)
return 'O';
if (in_array($lastchar, array(".","?","!")))
return 'EOW';
}
}
function normalize($s) {
// get this somewhere static
$hash_table = array(
'whats'=>'what is',
'whts'=>'what is',
'what\'s'=>'what is',
'\'s'=>'is',
'n\'t'=>'not',
'ur'=>'your'
// .... more ....
);
$s = mb_strtolower($s,'utf-8');
if (isset($hash_table[$s]))
return $hash_table[$s];
return $s;
}
$whitespace_tok = new WhitespaceTokenizer();
$punct_tok = new WhitespaceAndPunctuationTokenizer();
$sentence_tok = new ClassifierBasedTokenizer(
new EndOfSentence(),
$whitespace_tok
);
$text = 'hi, my name is john doe. I live in new york. What\'s your name? whts ur name';
foreach ($sentence_tok->tokenize($text) as $sentence) {
$words = $whitespace_tok->tokenize($sentence);
$words = array_map(
'normalize',
$words
);
$words = call_user_func_array(
'array_merge',
array_map(
array($punct_tok,'tokenize'),
$words
)
);
// decide what this sequence of tokens is
print_r($words);
}
A: First of all, fix all short codes, for example "wht's" instead of "what is":
$txt=$_POST['txt'];
$txt=str_ireplace("hw r u","how are you",$txt);
$txt=str_ireplace(" hw "," how ",$txt);//remember a space before and after the phrase is required, else it will replace every occurrence of hw (even inside a word if hw exists).
$txt=str_ireplace(" r "," are ",$txt);
$txt=str_ireplace(" u "," you ",$txt);
$txt=str_ireplace(" wht's "," What is ",$txt);
Similarly Add as many phrases as you want..
now just check all possible questions in this text & get their position
if (strpos($phrase,"What is your name") !== false) {//use !== false here, because the match could be at position 0
return $response;
}
A: You may think of using the soundex function to convert the input string into a phonetically equivalent writing, and then proceed with your search.
soundex | unknown | |
d9271 | train | I do not know if you already solved this issue.
The solution for me was to use the view ID instead of the account ID of the Analytics account.
The view ID is on the third column in settings, on Google Analytics administration panel.
Sorry for my english. | unknown | |
d9272 | train | You should use a new XHR object for each request
for (let index = 0; index < lists.length; index++) {
let countdata = new XMLHttpRequest();
Api2Url = URLstart2 + lists[index].value+"&key="+ApiKey+ URLend2;
countdata.onload = function() {
API2Response = JSON.parse(this.responseText);
lists[index].count = API2Response.items[0].statistics.videoCount;
}
countdata.open("GET", Api2Url, true);
countdata.send();
}
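If you later want to wait for all of the requests to finish before using lists, the same loop can be written with fetch and async/await. A rough sketch (it reuses the lists, URLstart2, ApiKey and URLend2 names from your snippet; loadCounts is just an illustrative wrapper name):
async function loadCounts() {
  for (const item of lists) {
    const url = URLstart2 + item.value + "&key=" + ApiKey + URLend2;
    const response = await fetch(url);   // waits for this request to complete
    const data = await response.json();
    item.count = data.items[0].statistics.videoCount;
  }
  // every item.count is filled in at this point, so it is safe to use lists here
}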
Since you want to use lists after the for loop, I suggest taking a look at fetch and async/await, as in the sketch above; it lets you wait until every request has finished before touching the results. | unknown |
d9273 | train | case 0x06: case 0x16:
case 0x0E: case 0x1E: opcode.mnemonic = "asl"; break;
default: opcode.mnemonic = null;
}
//Opcodes.valueOf(this.mnemonic).run();
}
public void testOpcodes(){
opcode.code = 0;
while((opcode.code & 0xFF) < 0xFF){
//System.out.printf("PC = 0x%04X \n", PC);
exec();
if(opcode.mnemonic != null)
opcode.print();
//Opcode.valueOf(opcode.mnemonic).run();
opcode.code++;
}
}
public static void main(String[] args) {
// TODO Auto-generated method stub
CPU cpu = new CPU(true);
cpu.init();
cpu.testOpcodes();
}
}
A: Well, I think that's the start of a good way to write a 6502 CPU emulator. But it needs some work ...
Ask yourself: What is an enum in Java? And what is it good for?
It's basically a class with a fixed number of instances, so it's great for representing static data (and behaviour) and grouping that - with associated methods - to be easily visible, testable, modifiable.
In various methods you have switch statements that break out the different addressing modes and operations for each opcode:
switch(opcode) {
case 0x00: return "Immediate";
case 0x04: return "ZeroPaged";
case 0x0C: return "Absolute";
case 0x14: return "IndexedZeroPagedX";
case 0x1C: return "IndexedAbsoluteX";
default: return "Type 0 undefined";
}
And we would have to add more switch statements if we wanted instructions times etc.
But this is static data. Those cases constants should be enum properties. Isn't this the kind of data that should be encoded in the enum? I think so, and so did Brendan Robert, who wrote JACE, the Java Apple Computer Emulator. His code is a great example of a well thought-out Java enum.
Here are the first few lines of his 6502 CPU's OPCODE enum:
public enum OPCODE {
ADC_IMM(0x0069, COMMAND.ADC, MODE.IMMEDIATE, 2),
ADC_ZP(0x0065, COMMAND.ADC, MODE.ZEROPAGE, 3),
ADC_ZP_X(0x0075, COMMAND.ADC, MODE.ZEROPAGE_X, 4),
// ...
}
All the static data is grouped together nicely, easily visible, ready to be used in case statements etc.
A: I can't speak best or worst, I can only speak to what I did.
I have an OpCode class, and I create an instance of this class for each opcode (0-255, undefined opcodes are NOPs on my machine).
My OpCode contains two methods. One represents the addressing mode, the other the actual instruction.
Here's my execute method:
public void execute(CPU cpu) throws IllegalAccessException, IllegalArgumentException, InvocationTargetException {
Integer value = (Integer) addrMethod.invoke(cpu);
opMethod.invoke(cpu, value);
}
I build up my list of OpCodes with a list of strings, such as ORA abs. ORA is mapped to a logic method in my CPU class, and abs is mapped to an addressing method. I use reflection to look up the methods and stuff them into my OpCode instances.
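The lookup itself is plain java.lang.reflect; a rough sketch of what that wiring might look like (the OpCode constructor and the opCodes array are illustrative names, not the actual project code):
// resolve "ORA abs" into the two Method handles used by execute()
Method opMethod   = CPU.class.getMethod("ORA", int.class);
Method addrMethod = CPU.class.getMethod("fetchAbsolute");
opCodes[0x0D] = new OpCode(addrMethod, opMethod); // $0D is ORA absolute on the 6502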
public void ORA(int value) {
acc = acc | value;
setFlagsNZ(acc);
}
public int fetchAbsolute() {
int addr = addrAbsolute();
return fetchByte(addr);
}
addrAbsolute will pull the 2 bytes from memory, and increment the PC by 2, among other things. fetchByte gets the value at the address. The value is then passed in to ORA, which acts on the accumulator.
In the end, I have 256 opcodes with a method for the logic, and a method for the addressing.
The core of my simulator is simply setting the initial address, fetching the opcode from that address, increment the address, the execute the opcode.
int code = mem.fetchByte(pc++);
OpCode op = Instructions.opCodes[code];
op.execute(this);
(this being the CPU instance).
Mind, mine is a soft simulator, it doesn't strive for cycle parity or anything like that. It doesn't simulate any specific hardware (like a C64). It's a raw 6502 with a few dedicated memory locations that I/O to a terminal.
But this is what came out of my little head. I didn't study other simulators, I wasn't motivated to go sussing out bit patterns within the instructions. I just make a table for every possible opcode and what it was supposed to do. | unknown | |
d9274 | train | Something is wrong with the output, because it produces a syntax error when I copy it to the console, so I'll assume that you have a tr with a td, an img and another td inside.
Pattern matching can be used for unwrapping this way. Let's say you have your whole data in the variable Data; you can extract the contents with:
[TR] = Data,
{_Name, _Attrs, Contents} = TR.
Now Contents is again a list of nodes: [Td1, Img, Td2], so you can do:
[Td1, Img, Td2] = Contents.
And so on, until you actually reach your contents. But writing that is pretty tedious, so you can use recursion instead. Let's define a contents function that recursively scans the elements.
contents({_Name, _Attrs, Contents}) ->
case Contents of
[] -> []; % no contents like in img tag
[H | T] ->
case is_tuple(H) of
                % tuple means list of children
true -> lists:map(fun contents/1, [H | T]);
% otherwise this must be a string
_ -> [H | T]
end
end.
This will return a nested list, but at the end you can run lists:flatten like this:
Values = lists:flatten(contents(Data)).
A: The curly and square brackets are not balanced in your example. I guess it is missing a }] at the end.
It seems that the depth of the expression may vary, so you have to explore it recursively. The code below does it, assuming that you will find the information in type "a" elements:
-module (ext).
-compile([export_all]).
extL(L) -> extL(L,[]).
extL([],R) -> R;
extL([H|Q],R) -> extL(Q, extT(H) ++ R).
extT({"a",PropL,L}) ->
case proplists:get_value("class",PropL) of
"lnkUserName" -> [{user_name, hd(L)}];
"PostDisplay" -> [{post_display,hd(L)}];
_ -> extL(L,[])
end;
extT({_,_,L}) -> extL(L,[]).
with your example it returns the proplist [{post_display,"This is a message"},{user_name,"UserNameA"}] | unknown | |
d9275 | train | You can refactor it into something like below:
$groups = @{
"$pwd\servers\servers_1.lst"="Error text for server group 1";
"$pwd\servers\servers_2.lst"="Error text for server group 2";
"$pwd\servers\servers_3.lst"="Error text for server group 3";
}
$startupErrors = @{}
$groups.keys | %{
$key = $_
gc $key | %{
$startupErrors[$_] = Get-ChildItem -Path \\$_\$LOG_PATH -Include StartupError.log -Recurse | Select-String -notmatch $groups["$key"]
}
}
Basically, this uses a hashtable to associate the search text with the server group. Also, I have given only the refactoring solution; the Get-ChildItem and Select-String parts may not be doing what you want. | unknown |
d9276 | train | In HTML, button is not a "self-closing" tag, therefore, IE is actually doing it correctly. Just appending the /> to a tag does not automatically close it, as it does in XML. You need to do this:
<button id="fourth_button" type="button"><span>Button Text</span></button> | unknown | |
d9277 | train | You can create a wrapper script, as I have done:
wkhtmltoimage:
xvfb-run --server-args="-screen 0, 1024x680x24" wkhtmltoimage.bin -q --use-xserver $*
where wkhtmltoimage.bin is original binary. | unknown | |
d9278 | train | return self.nameList.count;
What is the value of nameList.count when the table view's numberOfRowsInSection is called? Try setting it to a fixed value just to test. Maybe you are not setting nameList properly.
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
return 1;
} | unknown | |
d9279 | train | You could use Object#public_send method:
def conveyqnces_nb(symb)
User.public_send(symb).length
end
or, if you would like, Object#send might be suitable. The difference is that send allows you to call private and protected methods, while public_send acts as a typical method call: it raises an error if the method is private or protected and the calling object isn't allowed to call it.
A: Use the #send instance method
u.send(symb) | unknown |
d9280 | train | For positive, you can try this logic:
AVG(CASE WHEN scores#>>'{medic,social,total}' in ('high', 'medium')
THEN 100.0
WHEN scores#>>'{medic,social,total}' in ('low')
THEN 0.0
    END) OVER (ORDER BY date(survey_results.created_at)) AS positive
(And similar logic for negative.)
I think this encapsulates your logic, except it returns NULL rather than 0 when there are no matches. If that is a problem, you can use COALESCE(). | unknown | |
d9281 | train | You could use UPDATE...
UPDATE tbl
SET col1 = newCol1,
col2 = newCol2
WHERE etc = etc
And if you want to insert the updated row into another table, you could use an AFTER UPDATE trigger for that (the deleted pseudo-table holds the pre-update values; use inserted if you want the new values).
CREATE TRIGGER TriggerName ON Tbl
AFTER UPDATE
AS
INSERT INTO Log (Col1, Col2)
SELECT Col1, Col2
FROM deleted | unknown | |
d9282 | train | Akhila, I would recommend that you use the content prop instead of the text prop for your Dropdown.Item that you are rendering from your memberOptions array. The text prop specifically expects a string. The content prop will accept anything, including other React components or nodes. So instead of returning text as a string, you could do something like this for content, maybe as a separate class method on your component:
const renderItemContent = (member) => {
const {
email,
name,
} = member.user;
const emailStyle = {
color : '#333',
fontSize : '.875em',
}
return(
<React.Fragment>
{name}
{email &&
<div style={emailStyle}>{email}</div>
}
</React.Fragment>
)
}
Then set content: this.renderItemContent(member) on your memberOptionsArray. | unknown | |
d9283 | train | Right-click the class -> Run As -> Java Application
A: You can change the default behavior by going to Window > Preferences > Run/Debug > Launching and in the 'Launch Operation' section, select the radio button for Launch the selected resource or active editor and then select Launch the associated project underneath.
A: First of all, make sure that your program has a main class.
Then click the small arrow next to the Run button -> Run As -> Java Application.
If this doesn't work, make sure you have everything properly set up under:
Run Configurations | unknown | |
d9284 | train | *
*Spelling getElementById
*Move getting the input value inside the function that needs it
*you need a new cell each time, otherwise you just move the cell
you might want a new row too for each input
const inputField = document.getElementById("input");
const result_row = document.getElementById("results");
function insert_result() {
let input = inputField.value;
let new_row = document.createElement("td");
new_row.append(input);
result_row.appendChild(new_row);
}
<form>
<label for="img_tag"> What is the above picture? </label><br>
<input type="text" name="What is the above picture?" id="input">
<input type="button" id="submit-button" value="Submit" onclick="insert_result()">
</form>
<div id="result_table">
<table>
<tr>
<th> Image </th>
<th> Tag </th>
</tr>
<tr id="results">
</tr>
</table>
</div> | unknown | |
d9285 | train | The only way to open links in new tabs is by simulating keyboard shortcuts. The following hold true in FFX, Chrome & IE
*
*Ctrl+t will open a blank new tab, and switch focus to it.
*Holding Ctrl, then clicking the link will open the link in a new tab but leave focus on the existing tab.
*Holding Ctrl AND Shift, then clicking will open the link in a new tab AND move focus onto the new tab.
*Ctrl+w will close the current tab and switch focus to the last open tab (although note that Ctrl+W i.e. Ctrl+Shift+w will close ALL tabs!)
Selenium doesn't (currently) have any concept of tabs within a browser window, so in order to open the tab and then test it you HAVE to use option 3.
Try something like this:
WebDriver driver = new ChromeDriver();
driver.get("http://yahoo.com");
((JavascriptExecutor)driver).executeScript("window.open()");
ArrayList<String> tabs = new ArrayList<String>(driver.getWindowHandles());
driver.switchTo().window(tabs.get(1));
driver.get("http://google.com");
P.S
Look at here for this bug -> https://github.com/SeleniumHQ/selenium/issues/5462
A: Why not use JavaScriptExecutor to open a new window and switch to it?
Not sure about the Java syntax, but in Protractor it may be something like this
browser.executeScript('window.open()').then(function () {
browser.getAllWindowHandles().then(function (handles) {
var secondWindow = handles[1];
browser.ignoreSynchronization = true;
browser.switchTo().window(secondWindow).then(function () {
browser.get('https://google.com');
});
});
});
A: String baseUrl = "http://www.google.co.uk/";
driver.get(baseUrl);
((JavascriptExecutor) driver).executeScript("window.open()");
Set<String> tabs = new HashSet<String>();
tabs = driver.getWindowHandles();
List<String> li = new ArrayList<String>(tabs);
driver.switchTo().window(li.get(1));
driver.get("https://www.fb.com"); | unknown | |
d9286 | train | First of all, the DRF docs say that only text-based fields can be used as search fields, so if timeframe is a DateField it will most likely not work.
The SearchFilter class will only be applied if the view has a
search_fields attribute set. The search_fields attribute should
be a list of names of text type fields on the model,
such as CharField or TextField.
You can try creating a custom SearchFilter class and overriding its filter_queryset method to use AND instead of OR.
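A rough, untested sketch of that idea follows; it assumes your view still declares search_fields, leans on SearchFilter's get_search_terms helper, and ignores the ^/=/@/$ lookup prefixes that the built-in filter supports:
import operator
from functools import reduce

from django.db.models import Q
from rest_framework import filters


class AndSearchFilter(filters.SearchFilter):
    def filter_queryset(self, request, queryset, view):
        search_terms = self.get_search_terms(request)
        if not search_terms:
            return queryset
        search_fields = getattr(view, 'search_fields', [])
        conditions = []
        for term in search_terms:
            # each term must match at least one of the declared search fields
            or_queries = [Q(**{field + "__icontains": term}) for field in search_fields]
            conditions.append(reduce(operator.or_, or_queries))
        # AND the per-term conditions together instead of OR-ing everything
        return queryset.filter(reduce(operator.and_, conditions))
Then point the view's filter_backends at AndSearchFilter instead of the stock SearchFilter.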
See DRF source code
https://github.com/encode/django-rest-framework/blob/86673a337a4fe8861c090b4532379b97e3921fef/rest_framework/filters.py#L123 | unknown | |
d9287 | train | Angular does not use ? in routing, instead it uses ; for multiple parameters
The optional route parameters are not separated by "?" and "&" as they
would be in the URL query string. They are separated by semicolons ";"
This is matrix URL notation—something you may not have seen before.
In your case, you are passing a single parameter. So Your route should be similar to
{path: ':param1' component: AppComponent}
Then you would be able to access the param1 using the code written in ngOnInit method. The code should be as shown below
ngOnInit() {
this.activatedRoute.params.subscribe(params=>console.log(params['param1']));
}
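For reference, this is how those two URL shapes are typically produced from code (a hedged sketch; '/somepath' is a placeholder route and Router is assumed to be injected via the constructor):
// assuming: constructor(private router: Router) {}
this.router.navigate(['/', 'en']);                      // required param -> /en (matches path ':param1')
this.router.navigate(['/somepath', { param1: 'en' }]);  // optional params -> /somepath;param1=en (matrix notation)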
If you are planning to use query parameters, then you should use queryParams from ActivateRoute and url should be http://localhost:4216/?param1=en and use below code to access data
ngOnInit(){
this.activatedRoute.queryParams.subscribe(params=>console.log(params['param1']));
}
Also including a working example
A: It was a silly mistake.
The code was fine, but what I was doing was using ngOnInit to get data from the URL string and the constructor to inject the translation service, and I was trying to exchange data between those two. Since I could not get that to work and kept getting errors, I moved both into the constructor and removed ngOnInit for getting the URL parameter (I know, silly!).
import {Component} from '@angular/core';
import {TranslateService} from '@ngx-translate/core';
import { ActivatedRoute } from '@angular/router';
import { AppGlobals } from '../app/app.global';;
@Component({
selector: 'app-root',
template: `
<div>
<h2>{{ 'HOME.TITLE' | translate }}</h2>
<label>
{{ 'HOME.SELECT' | translate }}
</label>
</div>
`,
providers: [ AppGlobals]
})
export class AppComponent {
param1: string;
constructor(public translate: TranslateService,private route: ActivatedRoute,private _global: AppGlobals) {
console.log('Called Constructor');
this.route.queryParams.subscribe(params => {
this.param1 = params['param1'];
translate.use(this.param1);
});
// this.route.queryParams.subscribe(params => {
// this._global.id = params['id'];
//console.log('hello'+this._global.id);
//})
//translate.addLangs(['en', 'fr']);
//translate.setDefaultLang('en');
//const browserLang = translate.getBrowserLang();
//translate.use(browserLang.match(/en|fr/) ? browserLang : 'en');
//const browserLang = translate.getBrowserLang();
console.log('hello'+this._global.id);
translate.use(this._global.id);
}
ngOnInit() {
}
} | unknown | |
d9288 | train | The token is a placeholder of a pending charge, it does not know how much you are going to charge yet. Once you are ready to charge the card an api request will be sent to Stripe along with the token. The concern about the amount deals with relying on POST data from a form that can be manipulated by the customer.
A: Its up to you to set the charge amount. For example a hotel could authorize $100 to spend the night but then at check out discover that you used the minibar and then charge $150. Or the auto calculated shipping is off so when you actually purchase the shipping its $5 less and you decide to charge $5 less than your auth.
What you should be doing is calculating the amount to charge the customer, saving it via a shopping-cart-like function in your DB (or server-side somehow), sending the checkout form to the customer, and then using the previously calculated amount to run the auth and then the charge.
Form data can easily be changed by the end user. Just open the page and right click (in chrome) and click inspect element. You can then arbitrarily change form data. So if you were using that, the user could set the price to $.01 for your $1,000.00 product.
The purpose of tokenization in the PCI world is to keep sensitive data off your servers. Otherwise you would collect the PCI data yourself and then send the amount off to the processor along with the PCI data. By never having the sensitive data touch your systems you save a ton of money and headache in PCI compliance. See this 115 page document: https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-1.pdf
Hope that helps, Please comment and I'll try to help further if it doesn't. | unknown | |
d9289 | train | The approach offered by asmeurer seems to be applicable: see How to solve matrix equation with sympy?.
First, declare A, B and C to be non-commutative variables and obtain a solution to the equation. Second, re-define C and A as the desired arrays and then apply the formula to these arrays.
>>> from sympy import *
>>> A,B,C = symbols('A B C', commutative=False)
>>> solve(A+2*B-C,B)
[(-A + C)/2]
>>> A = Matrix([2,2,1,5])
>>> C = Matrix([1,1,1,1])
>>> A = A.reshape(2,2)
>>> C = C.reshape(2,2)
>>> (-A + C)/2
Matrix([
[-1/2, -1/2],
[ 0, -2]])
To answer the question in the comments: Define matrix C to be the zero matrix on the right of the equation and proceed as above.
>>> A,B,C = symbols('A B C', commutative=False)
>>> solve(2*A+B-C,A)
[(-B + C)/2]
>>> B = Matrix([1,4,3,5])
>>> B = B.reshape(2,2)
>>> C = Matrix([0,0,0,0])
>>> C = C.reshape(2,2)
>>> (-B + C)/2
Matrix([
[-1/2, -2],
[-3/2, -5/2]]) | unknown | |
d9290 | train | for only derived classes.. use protected
Protected means that access is limited to the containing class or types derived from the containing class. | unknown | |
d9291 | train | As Uli says in the comment, what can and cannot be a modifier is a constraint coming from Xorg. It cannot be Tab.
But with awful.keygrabber, you can create a keybinding on modkey+Tab, then from that callback start the keygrabber and intercept the number keys from there. When the keygrabber detects that Tab is released, stop it. There are multiple built-in methods and properties to make this rather easy.
See https://awesomewm.org/apidoc/core_components/awful.keygrabber.html for more details.
Just take the Alt+Tab example (link above) and modify it to fit your use case. | unknown | |
d9292 | train | It's not secure. All client side validations are insecure by design. Pattern validation passwords are visible in the source code. Having said that, a single password for multiple users is also insecure. All it takes is one user compromise to invalidate the whole thing.
If you need a fully secure solution, create your own form with HtmlService with oauth authorization and Google identity. | unknown | |
d9293 | train | Just upgrade your react-router version from 0.0.13 to 1.0.0-rc1 (beta), because the code below will only work with the 1.0.0 beta version, as mentioned in the changelog.
React.render(<Router><Route path="/" component={App}>
<IndexRoute component={Index}/>
<Route path="about" component={About}/>
</Route>
</Router>, document.getElementById('app'));
After upgrading the version you have to declare the router properly, like below.
var React = require('react');
var ReactRouter = require('react-router');
var Router = ReactRouter.Router;
var Route = ReactRouter.Route;
Now the router is defined properly. This worked for me; give it a try. | unknown |
d9294 | train | This isn't related to the access token in any way. It is generated before configuring the embedding process. To embed the report in phone view, you must specify MobilePortrait layout type in the embed configuration, i.e. something like this:
var config = {
.....
settings: {
filterPaneEnabled: true,
navContentPaneEnabled: true,
        layoutType: models.LayoutType.MobilePortrait // <-- THIS ONE
}
};
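Then pass that configuration to the embed call (a short sketch assuming the standard powerbi-client global and a container element; 'reportContainer' is a placeholder id):
var embedContainer = document.getElementById('reportContainer'); // placeholder element id
var report = powerbi.embed(embedContainer, config);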
If you omit layoutType, it will be shown in the landscape view (i.e. like in the desktop). For more information about the configuration see Embed Configuration Details, and for embedding in general you should start from Embedding Basics. | unknown | |
d9295 | train | It's not complaining about LocalWebUser. It's complaining about LocalWebUser$1 which is an anonymous inner class within LocalWebUser. Look through the code in LocalWebUser for something like this:
Object something = new Something() { .... };
That's an anonymous inner class. If that isn't serializable and a reference to that is being leaked into an object stored in the session, then there's your problem.
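If that is the culprit, the usual fixes are to keep the anonymous class out of the session-stored object; a sketch with placeholder type names, not your actual code:
// instead of: Object something = new Something() { ... };
// use a named static nested (or top-level) class that is explicitly serializable:
static class MySomething implements Something, java.io.Serializable {
    private static final long serialVersionUID = 1L;
    // ...
}
// or keep the non-serializable helper out of serialization entirely:
private transient Something something;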
A: One possibility is that the @Value annotation modifies the type of the userId field to some other, non serializable type, through bytecode manipulation.
So when you call userSession.setUserId(webUser.getUserId()) it copies that non serializable type to the UserSession.
However I could not find any reference about the @Value annotation to support that, so this is just a hunch.
You could probably validate this by inspecting the type of the userId field at runtime with a debugger. | unknown | |
d9296 | train | I suppose searchfile is a file you opened earlier, e. g. searchfile = open('.\someotherfile', 'r').
In this case, your construction doesn't work, because a file is an iterable which can be iterated over only once and then it is exhausted.
You have two options here:
*
*Reopen the file on every outer loop run
*Read the file's contents into a list and iterate over this list as often as you need to.
What happens in your code?
At the start of the nested for loops, both your files are open and can be read from.
Whenever the first inner loop run is over, searchfile is at its end. When the outer loop now comes to process its second entry, the inner loop is like an empty loop, as it just cannot produce more entries.
A: The in and with keywords
You don't need two nested for loops!
Instead use the more pythonic in keyword like so:
with open("./search_file.txt", mode="r") as search_file:
lines_to_search = search_file.readlines()
with open("./file_to_search.txt", mode="r") as file_to_search:
for line_number, line in enumerate(file_to_search, start=1):
if line in lines_to_search:
print(f"Match at line {line_number}: {line}")
Pro tip: Open your files using the with statement to automatically close them.
A: You need to reset searchfile's current position to the beginning for every infile iteration. You can use the seek function for this.
searchfile = open('.\sometext.txt', 'r')
infile = open('.\somefile', 'r')
for line1 in infile:
searchfile.seek(0,0)
for line2 in searchfile:
print line2
searchfile.close()
infile.close()
A: We need a bit more detail about what your objects are in this code. But you probably would like to do:
infile = open('.\somefile', 'r')
for line1 in infile:
for line2 in line1:
print line2
searchfile.close()
infile.close()
If your infile is a list of lists - That are the cases where a nested for loop would make sense. | unknown | |
d9297 | train | From the documentation:
Return from applicationDidEnterBackground(_:) as quickly as possible. Your implementation of this method has approximately five seconds to perform any tasks and return. If the method doesn’t return before time runs out, your app is terminated and purged from memory.
If you need additional time to perform any final tasks, request additional execution time from the system by calling beginBackgroundTask(expirationHandler:). Call beginBackgroundTask(expirationHandler:) as early as possible. Because the system needs time to process your request, there’s a chance that the system might suspend your app before that task assertion is granted. For example, don’t call beginBackgroundTask(expirationHandler:) at the very end of your applicationDidEnterBackground(_:) method and expect your app to continue running.
If the long-running operation you describe above is on the main thread and it takes longer than 5 seconds to finish after your application heads to the background, your application will be killed. The main thread will be blocked and you won't have a chance to return from -applicationDidEnterBackground: in time.
If your task is running on a background thread (and it really should be, if it's taking long to execute), that thread appears to be paused if the application returns from -applicationDidEnterBackground: (according to the discussion in this answer). It will be resumed when the application is brought back to the foreground.
However, in the latter case you should still be prepared for your application to be terminated at any time while it's in the background by cleaning things up on your way to the background.
A: If you are doing some operation which might consume time and you don't want to kill it then you can extend the time for your operation by executing in UIBackground Task i
{
  __block UIBackgroundTaskIdentifier taskId = UIBackgroundTaskInvalid; // __block so the handler can reassign it
  taskId = [application beginBackgroundTaskWithExpirationHandler:^{
      [application endBackgroundTask:taskId]; // always end the task in the handler, or the app is killed
      taskId = UIBackgroundTaskInvalid;
}];
// Execute long process. This process will have 10 mins even if your app goes in background mode.
}
The block argument called "handler" is what will happen when the background task expires (10 min).
Here is a link to the documentation
A: Like mentioned above, there are a few cases where your app runs in the background and apple can allow or deny depending on what you are doing.
https://developer.apple.com/library/ios/documentation/iphone/conceptual/iphoneosprogrammingguide/ManagingYourApplicationsFlow/ManagingYourApplicationsFlow.html
More importantly if you do fit into one of these categories your app refresh rate is determined by an apple algorithm that takes into consideration your app usage on that device vs other apps. If your app is used more often then it gets more background time allotted. This is just one variable but you get the idea that background time allocation varies app to app and not under your control. | unknown | |
d9298 | train | You can do that server side by just placing <asp:Image runat="server" ImageUrl="some.gif" /> tags in your ASP.NET code. The browser will show them when the page is loaded.
A: You don't need to use JQuery, use CSS background-image property for every <option>. | unknown | |
d9299 | train | Thanx guys for your help, all i needed was to make a Service
the onReceive method will be
@Override
public void onReceive(Context context, Intent intent) {
if (Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction()))
{
Intent i= new Intent(context, MyService.class);
context.startService(i);
}
}
MyService class
public class MyService extends Service implements LocationListener {
@Override
public IBinder onBind(Intent intent) {
// TODO Auto-generated method stub
return null;
}
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
//TODO do something useful
LocationManager LM2=(LocationManager) this.getSystemService(Context.LOCATION_SERVICE);
LM2.requestLocationUpdates("gps",5000, 0, this);
return Service.START_STICKY;
}
@Override
public void onLocationChanged(Location location) {
// TODO Auto-generated method stub
}
@Override
public void onProviderDisabled(String provider) {
// TODO Auto-generated method stub
}
@Override
public void onProviderEnabled(String provider) {
// TODO Auto-generated method stub
}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
// TODO Auto-generated method stub
}
}
and i need to register my service in the manifest.xml
<service
android:name="com.my.package.MyService"
android:icon="@drawable/icon"
android:label="Service name"
>
</service>
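If the service still gets killed shortly after boot, one option is to promote it to a foreground service inside onStartCommand before requesting location updates. A rough sketch (the icon, texts and notification id are placeholders; on Android 8.0+ you would also need to create a notification channel first):
// inside MyService.onStartCommand(), before requestLocationUpdates()
Notification note = new Notification.Builder(this)
        .setSmallIcon(R.drawable.icon)
        .setContentTitle("Location service")
        .setContentText("Collecting GPS fixes")
        .build();
startForeground(1, note); // any non-zero id is fine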
A:
When the the onReceive() called the GPS start for 2-3 second and gone, why?
Because your process was terminated. | unknown | |
d9300 | train | Here's what I created a few months ago to copy all the formulas in one worksheet to another.
Note: I am having a problem where some formulas using a Name are not correctly copying the Name because something thinks the Name(i.e. =QF00) is a reference and will change it with AutoFill. Will update when I figure it out.
cx.IXLWorksheet out_buff;   // destination worksheet, assumed to be initialised elsewhere (cx is an alias for ClosedXML.Excel)
cx.IXLRange src_range;      // source range, assumed to be initialised elsewhere
cx.IXLCells tRows = src_range.CellsUsed(x => x.FormulaA1.Length > 0);
foreach (cx.IXLCell v in tRows)
{
    string cell_address = v.Address.ToString();
out_buff.Range(cell_address).FormulaA1 = v.FormulaA1;
} | unknown |