_id (string, 2-6 chars) | partition (string, 3 classes) | text (string, 4-46k chars) | language (string, 1 class) | title (string, 1 class)
---|---|---|---|---
d5901 | train | Here's a vectorized way -
import numpy as np

# numbers is assumed to be the 1-D NumPy array from the question
n = len(numbers)
fwd = numbers.cumsum()/np.arange(1,n+1)
bwd = (numbers[::-1].cumsum()[::-1])/np.arange(n,0,-1)
k_out = np.r_[np.nan,fwd[:-1]]/bwd
Optimizing a bit further with one cumsum, it would be -
n = len(numbers)
r = np.arange(1,n+1)
c = numbers.cumsum()
fwd = c/r
b = c[-1]-c
bwd = np.r_[1,b[:-1]]/r[::-1]
k_out = np.r_[np.nan,fwd[:-1]]/bwd
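A quick sanity check against a naive loop (my own sketch; it assumes numbers is a 1-D NumPy array and that one of the snippets above has been run so k_out exists):
naive = np.array([np.nan] + [numbers[:i].mean() / numbers[i:].mean() for i in range(1, len(numbers))])
assert np.allclose(naive[1:], k_out[1:])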
A: I spent some time on this and there is a simple and universal solution: numpy.vectorize with the excluded parameter, where the vector designated to be split must be excluded from vectorisation. The example still uses np.mean but it can be replaced with any function:
def split_mean(vect,i):
    return np.mean(vect[:i])/np.mean(vect[i:])
v_split_mean = np.vectorize(split_mean)
v_split_mean.excluded.add(0)
numbers = np.random.rand(30)
indexes = np.arange(*numbers.shape)
v_split_mean(numbers,indexes) | unknown | |
d5902 | train | From your XML and the error, I believe it's because you are adding a default namespace after adding an element with no namespace declaration, so you're effectively creating an element and then changing its namespace.
Try the following code - it stops the error when I test it locally just for the XML I think you're trying to get:
XmlWriter writer = XmlWriter.Create(fileName);
writer.WriteStartDocument(true);
writer.WriteStartElement("PrincetonStorageRequest", "http://example.com/abc/dss/pct/v1.0");
writer.WriteAttributeString("xmlns", "http://example.com/abc/dss/pct/v1.0");
writer.WriteAttributeString("requestId", name);
writer.WriteAttributeString("timestampUtc", "2015-02-19T09:25:30.7138903Z");
writer.WriteStartElement("StorageItems");
So when I create the PrincetonStorageRequest element I am specifying a namespace URI.
Edit: Just to check, this is the XML that gets created but I did have to add the code to write the end elements:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<PrincetonStorageRequest xmlns="http://example.com/abc/dss/pct/v1.0" requestId="RequestOut_MAG_Test_02" timestampUtc="2015-02-19T09:25:30.7138903Z">
<StorageItems/> | unknown | |
d5903 | train | THE CORRECT WAY ************************ THE CORRECT WAY
while($rows[] = mysqli_fetch_assoc($result));
array_pop($rows); // pop the last row off, which is an empty row
A: Very often this is done in a while loop:
$types = array();
while(($row = mysql_fetch_assoc($result))) {
$types[] = $row['type'];
}
Have a look at the examples in the documentation.
The mysql_fetch_* methods will always get the next element of the result set:
Returns an array of strings that corresponds to the fetched row, or FALSE if there are no more rows.
That is why the while loop works. If there aren't any rows anymore, $row will be false and the while loop exits.
It only seems that mysql_fetch_array gets more than one row, because by default it gets the result as normal and as associative value:
By using MYSQL_BOTH (default), you'll get an array with both associative and number indices.
Your example shows it best, you get the same value 18 and you can access it via $v[0] or $v['type'].
A: You do need to iterate through...
$typeArray = array();
$query = "select * from whatever";
$result = mysql_query($query);
if ($result) {
while ($record = mysql_fetch_array($result)) $typeArray[] = $record['type'];
}
A: $type_array = array();
while($row = mysql_fetch_assoc($result)) {
$type_array[] = $row['type'];
}
A: while($row = mysql_fetch_assoc($result)) {
echo $row['type'];
}
A: You could also make life easier using a wrapper, e.g. with ADODb:
$myarray=$db->GetCol("SELECT type FROM cars ".
"WHERE owner=? and selling=0",
array($_SESSION['username']));
A good wrapper will do all your escaping for you too, making things easier to read.
A: You may want to go look at the SQL Injection article on Wikipedia. Look under the "Hexadecimal Conversion" part to find a small function to do your SQL commands and return an array with the information in it.
https://en.wikipedia.org/wiki/SQL_injection
I wrote the dosql() function because I got tired of having my SQL commands executing all over the place, forgetting to check for errors, and being able to log all of my commands to a log file for later viewing if need be. The routine is free for whoever wants to use it for whatever purpose. I actually have expanded on the function a bit because I wanted it to do more but this basic function is a good starting point for getting the output back from an SQL call. | unknown | |
d5904 | train | I fixed the problem by adding a shadow casting pass (for some reason Unity has no documentation on these). I also changed the fallback to "VertexLit" but I don't know if that had any effect. I still don't know why the shadow shapes were different in the editor than in the build, though.
//From https://answers.unity.com/questions/1003169/shadow-caster-shader.html
Pass{
Name "ShadowCaster"
Tags { "LightMode" = "ShadowCaster" }
Fog {Mode Off}
ZWrite On ZTest Less Cull Off
Offset 1, 1
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#pragma multi_compile_shadowcaster
#include "UnityCG.cginc"
sampler2D _MainTex;
struct v2f
{
V2F_SHADOW_CASTER;
float2 uv : TEXCOORD1;
};
v2f vert(appdata_full v )
{
v2f o;
o.uv = v.texcoord;
TRANSFER_SHADOW_CASTER(o)
return o;
}
float4 frag( v2f i ) : COLOR
{
fixed4 c = tex2D (_MainTex, i.uv);
clip(c.a - 0.9);
SHADOW_CASTER_FRAGMENT(i)
}
ENDCG
} | unknown | |
d5905 | train | Try this pattern:
$("#someAnimatedGif").show();
$.getJSON("url", function (data) {
$("#someAnimatedGif").hide();
});
The animated gif will initially be hidden, and you can use JQuery to hide/show it.
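For example, the markup could be as simple as this (a sketch; the id matches the selector above, the image path is an assumption):
<img id="someAnimatedGif" src="images/loading.gif" alt="Loading..." style="display:none" />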
The key is to show it right before you execute the Ajax call, and hide it again when the callback returns. | unknown | |
d5906 | train | Use these options (In kotlin) -
GlideApp.with(mContext)
.apply(getSquareRequestOptions(true))
.load(url)
.thumbnail(0.5f)
.into(layout.bannerAdapterImg)
Where getSquareRequestOptions is -
fun getSquareRequestOptions(isCenterCrop:Boolean=true): RequestOptions {
return RequestOptions().also {
it.placeholder(R.drawable.ic_placeholder)
it.error(R.drawable.ic_err_image)
it.override(200, 200) // override size as you need
it.diskCacheStrategy(DiskCacheStrategy.ALL) //If your images are always same
it.format(DecodeFormat.PREFER_RGB_565) // the decode format - this will not use alpha at all
if(isCenterCrop)
it.centerCrop()
else
it.fitCenter()
}
}
*For Java code, just rewrite the getSquareRequestOptions helper in Java (a rough sketch follows).
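A hedged Java equivalent of that helper might look like this (a sketch, not the exact code):
RequestOptions getSquareRequestOptions(boolean isCenterCrop) {
    RequestOptions options = new RequestOptions()
            .placeholder(R.drawable.ic_placeholder)
            .error(R.drawable.ic_err_image)
            .override(200, 200) // override size as you need
            .diskCacheStrategy(DiskCacheStrategy.ALL)
            .format(DecodeFormat.PREFER_RGB_565);
    return isCenterCrop ? options.centerCrop() : options.fitCenter();
}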
This is the best Glide can do. If it still takes time, then compress the images on the server side. | unknown | |
d5907 | train | I think you confused some terms in translation to English, and what you are actually looking for is to create an <a href="">Link</a> that also passes a variable.
You can do this very simply, by:
@Html.ActionLink("ویرایش", "Index", "StepOfIdea", new { id = item.Id }, null)
This will create the HTML:
<a href="http://example.com/StepOfIdea/Index/5">ویرایش</a> | unknown | |
d5908 | train | I dug around a bit and the ancestors function seems to traverse the RClass.super c-struct member, the same as the method lookup does. So when I do a
class OtherClass end
obj = OtherClass.new
obj.class.singleton_class.singleton_class.ancestors =>
[#<Class:#<Class:OtherClass>>, \
#<Class:#<Class:Object>>, \
#<Class:#<Class:BasicObject>>, \
#<Class:Class>, \
#<Class:Module>, \
#<Class:Object>, \
#<Class:BasicObject>, \
Class, \
Module, \
Object, \
Kernel, \
BasicObject]
BasicObject
^ +---------+ +--------+
| | | | |
Kernel | #<Class:BasicObject> | #<Class:#<Class:BasicObject>>
^ | ^ | ^
| | | | |
Object | #<Class:Object> | #<Class:#<Class:Object>>
^ | ^ | ^
| | | | |
+-------+ | +--------+ | |
| | | | |
Module | #<Class:Module> | |
^ | ^ | |
| | | | |
Class | #<Class:Class> | |
^ | ^ | |
+---+ +--------+ |
|
obj--->OtherClass --->#<Class:OtherClass>--->#<Class:#<Class:OtherClass>>
That means the vertical arrows in the diagram can be seen as the RClass.super c-member traversal. The horizontal arrows, on the other hand, should be related to RBasic.klass; however, the Ruby code seems asymmetric.
...
|
obj---> OtherClass
When a singleton class is created the former RBasic.klass will get the RClass.super of the new singleton class.
... ...
Object #<Class:Object>
^ ^
| |
OtherClass |
^ |
| |
obj--->#<Class:#OtherClass:0x...> ->#<Class:OtherClass> -+
^-+
and going one step futher a singleton of a singleton then looks like:
... ... ...
Object #<Class:Object> #Class<#<Class:Object>>
^ ^ ^
| | |
OtherClass | |
^ | |
| | |
obj-->#<Class:#OtherClass:0x...>-->#<Class:OtherClass>-->#<Class:#<Class:OtherClass>>-+
^-+
The meaning/usage of a singleton class is understandable; however, the meaning/usage of the metaclasses is a bit esoteric. | unknown | |
d5909 | train | In my apps, I use CocoaHTTPServer to get local info into and off of the phone. You run the server and out-of-the-box, it indexes all the files in the documents directory.
To do what you want, you will need to edit the code to return some other kind of data format (XML is probably the easiest), then call this from inside your app to get that data. CocoaHTTPServer easily takes POST right out of the box too, so you can post an XML response as well.
After thinking about it, CocoaHTTPServer is best run on the computer side behind the scenes. The iPhone can then send info to the computer, where handling the code should be easier and you have more options.
A: On top of this you will want to look into Bonjour; it will allow the computer and the iPhone to discover each other without too much difficulty (i.e. by advertising their info on the network). | unknown | |
d5910 | train | This exception is usually thrown if you are using the network on the main thread.
Please use AsyncTask instead. | unknown | |
d5911 | train | Two things:
*
*Use chmod straight away instead of a find and exec, like so: chmod 755 #{current_path}
*Check if the server_owner user has permission to current_path. If not, then use sudo like so: sudo "chmod 755 #{current_path}" | unknown | |
d5912 | train | See: http://dev.mysql.com/doc/refman/5.0/en/charset-binary-op.html
SELECT * FROM accounts WHERE BINARY username = '$qrystring'
And also do what halfdan said! ;)
A: Please sanitize your $qrystring variable before passing it unfiltered to the database. (See SQL injection).
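One hedged way to do the sanitizing is a mysqli prepared statement instead of string interpolation (a sketch only; it assumes a mysqli connection in $db and reuses the BINARY comparison from the answer above):
$stmt = $db->prepare('SELECT * FROM accounts WHERE BINARY username = ?');
$stmt->bind_param('s', $qrystring);
$stmt->execute();
$result = $stmt->get_result();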
To make a case sensitive match you will have to use COLLATE on your column:
username COLLATE latin1_general_cs = '$qrystring'
From the manual:
Simple comparison operations (>=, >, =, <, <=, sorting, and grouping) are based on each character's “sort value.” Characters with the same sort value are treated as the same character. For example, if “e” and “é” have the same sort value in a given collation, they compare as equal. | unknown | |
d5913 | train | As documented under 32-bit and 64-bit Application Data in the Registry:
The KEY_WOW64_64KEY and KEY_WOW64_32KEY flags enable explicit access to the 64-bit registry view and the 32-bit view, respectively. For more information, see Accessing an Alternate Registry View.
The latter link explains that
These flags can be specified in the samDesired parameter of the following registry functions:
*
*RegCreateKeyEx
*RegDeleteKeyEx
*RegOpenKeyEx
The following code accomplishes what you are asking for:
// Get a handle to the required key
HKEY hKey;
if(RegOpenKeyEx(HKEY_LOCAL_MACHINE, "Software\\MyKey", 0, KEY_READ | KEY_WOW64_32KEY, &hKey) == ERROR_SUCCESS)
{
// ...
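// Hedged sketch of what might go here: read a value and close the key.
// The value name "InstallPath" and the buffer type/size are assumptions.
DWORD type = 0;
char data[MAX_PATH] = {0};
DWORD size = sizeof(data);
RegQueryValueEx(hKey, "InstallPath", NULL, &type, (LPBYTE)data, &size);
RegCloseKey(hKey);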
} | unknown | |
d5914 | train | The important fact is that user level threads (or green threads) are handled by the programming language and are not exposed to the operating system. In ULT the threads are entirely "hidden within the Python runtime". This has the advantage that the language runtime has full control over the threads.
On the other hand, kernel threads are handled by the OS. They can run on different cores at once and benefit from this speed improvement. The downside is that things like "thread safe memory access" need to be handled by the threads themselves and there is no "outside language scheduler" (as in user level threads) which can guarantee thread safety. That's why, for instance, in Python there's a "global interpreter lock" which guarantees that two kernel threads never execute Python bytecode at the same time.
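A small Python sketch of the practical effect (my own illustration, not part of the original answer): two CPU-bound threads take roughly as long as doing the work twice in a row, because the GIL lets only one thread execute Python bytecode at a time.
import threading, time

def burn():
    total = 0
    for _ in range(10_000_000):
        total += 1

start = time.time()
threads = [threading.Thread(target=burn) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print('two CPU-bound threads took', round(time.time() - start, 2), 'seconds')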
That's just a short summary. If you're interested to know more, look up "global interpreter lock" (python, ruby) or see this answer | unknown | |
d5915 | train | Are all your view controllers returning YES to shouldAutorotateToInterfaceOrientation: ? If so, I suggest to pass the interface orientation messages from the parent to the children viewControllers, as you suggested.
I have been doing so before and had no problems with that approach so far. | unknown | |
d5916 | train | returning and passing 2 one dimensional arrays
In C++, you can only return a single value. You cannot return multiple values, and the value that you return cannot be an array.
im not very comfortable with "struts" [sic]
I assume you mean structs. Well, now is the time to become comfortable, because a struct (also known as a class) is a great way to combine multiple values - even arrays - into a single object that can be returned.
Another option is to pass the function multiple references (or iterators or pointers) to objects that the function can modify instead of returning them. | unknown | |
d5917 | train | Your first dynamic SQL query also wants to access @FeatureID, but you're not passing it.
So move:
SET @ParmDefinition = N'@FeatureID int '
Up to the top of the proc and then call
EXECUTE sp_executesql @Query,@ParmDefinition,@FeatureID = @FeatureID
for both pieces of dynamic SQL.
For the general strategy - it would be far better if you made the stored proc accept a table-valued parameter for @Users and then you wouldn't need to use dynamic SQL at all.
Actually, on second reading, your second query also references @CreatedUserID, so you'll need to pass that across as a parameter to the second query. So you need to change the parameter definition between the two, or just add it to the parameters and pass it (pointlessly) to the first query. | unknown | |
d5918 | train | So looking over your code and information, I would try a couple of things, first verifying your access token. I have used this as a reference. Using a browser and a simple HTML page (see below) I am able to acquire the token and verify it. You will need to fill out the values as specified on that page.
Make sure that your token is correct first. I did this by using that userinfo address in a browser.
Next, your in-app purchase HTML location needs ?access_token={access_token} at the end after the
If you notice the section "Accessing the API" in the above page about authorization, you need to add this.
Here is the webpage that I have used to help get the token for testing. NOTE that this is for testing and will not work as final solution because the authorization token is only good for a short time. You will need to make sure that the code that gets that access token functions correctly.
In addition again refer to the page above to fill this out for your product. The only thing that is dynamic in this is the "code" value which comes from the instructions on that page.
Hope some of this can help you...
<form action=" https://accounts.google.com/o/oauth2/token" method="post">
<input name="grant_type" type="hidden" value="authorization_code" />
<input name="code" type="hidden" value="the code from the previous step"/>
<input name="client_id" type="hidden" value="the client ID token created in the APIs Console"/>
<input name="client_secret" type="hidden" value="the client secret corresponding to the client ID"/>
<input name="redirect_uri" type="hidden" value="the URI registered with the client ID"/>
<input type="submit" />
</form> | unknown | |
d5919 | train | As @Miff has written bars are generally not useful on a log scale. With barplots, we compare the height of the bars to one another. To do this, we need a fixed point from which to compare, usually 0, but log(0) is negative infinity.
So, I would strongly suggest that you consider using geom_point() instead of geom_bar(). I.e.,
library(ggplot2)
library(scales) # for trans_breaks(), trans_format(), math_format()

ggplot(df, aes(x=id, y=ymean , color=var)) +
geom_point(position=position_dodge(.7))+
scale_y_log10("y",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x)))+
geom_errorbar(aes(ymin=ymin,ymax=ymax),
size=.25,
width=.07,
position=position_dodge(.7))+
theme_bw()
If you really, really want bars, then you should use geom_rect instead of geom_bar and set your own baseline. That is, the baseline for geom_bar is zero but you will have to invent a new baseline in a log scale. Your Plot 1 seems to use 10^-7.
This can be accomplished with the following, but again, I consider this a really bad idea.
ggplot(df, aes(xmin=as.numeric(id)-.4,xmax=as.numeric(id)+.4, x=id, ymin=10E-7, ymax=ymean, fill=var)) +
geom_rect(position=position_dodge(.8))+
scale_y_log10("y",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x)))+
geom_errorbar(aes(ymin=ymin,ymax=ymax),
size=.25,
width=.07,
position=position_dodge(.8))+
theme_bw()
A: If you need bars flipped, maybe calculate your own log10(y), see example:
library(ggplot2)
library(dplyr)
# make your own log10
dfPlot <- df %>%
mutate(ymin = -log10(ymin),
ymax = -log10(ymax),
ymean = -log10(ymean))
# then plot
ggplot(dfPlot, aes(x = id, y = ymean, fill = var, group = var)) +
geom_bar(position = "dodge", stat = "identity",
width = 0.7,
size = 0.9)+
geom_errorbar(aes(ymin = ymin, ymax = ymax),
size = 0.25,
width = 0.07,
position = position_dodge(0.7)) +
scale_y_continuous(name = expression(-log[10](italic(ymean)))) +
theme_bw()
A: Firstly, don't do it! The help file from ?geom_bar says:
A bar chart uses height to represent a value, and so the base of the
bar must always be shown to produce a valid visual comparison. Naomi
Robbins has a nice article on this topic. This is why it doesn't make
sense to use a log-scaled y axis with a bar chart.
To give a concrete example, the following is a way of producing the graph you want, but a larger k will also be correct but produce a different plot visually.
k<- 10000
ggplot(df, aes(x=id, y=ymean*k , fill=var, group=var)) +
geom_bar(position="dodge", stat="identity",
width = 0.7,
size=.9)+
geom_errorbar(aes(ymin=ymin*k,ymax=ymax*k),
size=.25,
width=.07,
position=position_dodge(.7))+
theme_bw() + scale_y_log10(labels=function(x)x/k)
(The two resulting plots, for k=1e4 and k=1e6, differ visually even though both come from "correct" code.) | unknown | |
d5920 | train | Try this:
Connection_String = 'Driver={Oracle in OraClient11g_home1};DBQ=MyDB;Uid=MyUser;Pwd=MyPassword;' | unknown | |
d5921 | train | As advertised, increasing query.max-memory-per-node, and also by necessity the -Xmx property, indeed cannot be achieved on EMR until after Presto has already started with the default options. To increase these, the jvm.config and config.properties found in /etc/presto/conf/ have to be changed, and the Presto server restarted on each node (core and coordinator).
One can do this with a bootstrap script using commands like
sudo sed -i "s/query.max-memory-per-node=.*GB/query.max-memory-per-node=20GB/g" /etc/presto/conf/config.properties
sudo restart presto-server
and similarly for /etc/presto/conf/jvm.config. The only caveats are that one needs to include the logic in the bootstrap action to execute only after Presto has been installed, and that the server on the coordinating node needs to be restarted last (and possibly with different settings if the master node's instance type is different than the core nodes).
You might also need to change resources.reserved-system-memory from the default by specifying a value for it in config.properties. By default, this value is .4*(Xmx value), which is how much memory is claimed by Presto for the system pool. In my case, I was able to safely decrease this value and give more memory to each node for executing the query.
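For example, a hedged config.properties line for that could look like the following (the actual value is an assumption and depends on your instance type):
resources.reserved-system-memory=10GB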
A: As a matter of fact, there are configuration classifications available for Presto in EMR. However, please note that these may vary depending on the EMR release version. For a complete list of the available configuration classifications per release version, please visit 1 (make sure to switch between the different tabs according to your desired release version). Specifically regarding jvm.config properties, you will see in 2 that these are not currently configurable via configuration classifications. That being said, you can always edit the jvm.config file manually per your needs.
Amazon EMR 5.x Release Versions
1
Considerations with Presto on Amazon EMR - Some Presto Deployment Properties not Configurable:
2 | unknown | |
d5922 | train | I found two ways to go about this:
The first is based on this answer. Basically, you determine the number of pixels between the adjacent data-points and use it to set the marker size. The marker size in scatter is given as area.
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
# initialize a plot to determine the distance between the data points in pixel:
x = [1, 2, 3, 4, 2, 3, 3]
y = [0, 0, 0, 0, 1, 1, 2]
s = 0.0
points = ax.scatter(x,y,s=s,marker='s')
ax.axis([min(x)-1., max(x)+1., min(y)-1., max(y)+1.])
# retrieve the pixel information:
xy_pixels = ax.transData.transform(np.vstack([x,y]).T)
xpix, ypix = xy_pixels.T
# In matplotlib, 0,0 is the lower left corner, whereas it's usually the upper
# right for most image software, so we'll flip the y-coords
width, height = fig.canvas.get_width_height()
ypix = height - ypix
# this assumes that your data-points are equally spaced
s1 = xpix[1]-xpix[0]
points = ax.scatter(x,y,s=s1**2.,marker='s',edgecolors='none')
ax.axis([min(x)-1., max(x)+1., min(y)-1., max(y)+1.])
fig.savefig('test.png', dpi=fig.dpi)
The downside of this first approach is that the symbols overlap. I wasn't able to find the flaw in the approach. I could manually tweak s1 to
s1 = xpix[1]-xpix[0] - 13.
to give better results, but I couldn't determine the logic behind the 13.
Hence, a second approach based on this answer. Here, individual squares are drawn on the plot and sized accordingly. In a way it's a manual scatter plot (a loop is used to construct the figure), so depending on the data-set it could take a while.
This approach uses patches instead of scatter, so be sure to include
from matplotlib.patches import Rectangle
Again, with the same data-points:
x = [1, 2, 3, 4, 2, 3, 3]
y = [0, 0, 0, 0, 1, 1, 2]
z = ['b', 'g', 'r', 'c', 'm', 'y', 'k'] # in your case, this is data
dx = [x[1]-x[0]]*len(x) # assuming equally spaced data-points
# you can use the colormap like this in your case:
# cmap = plt.cm.hot
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
ax.axis([min(x)-1., max(x)+1., min(y)-1., max(y)+1.])
for x, y, c, h in zip(x, y, z, dx):
ax.add_artist(Rectangle(xy=(x-h/2., y-h/2.),
color=c, # or, in your case: color=cmap(c)
width=h, height=h)) # Gives a square of area h*h
fig.savefig('test.png')
One comment on the Rectangle: The coordinates are the lower left corner, hence x-h/2.
This approach gives connected rectangles. When I looked closely at the output here, they still seemed to overlap by one pixel - again, I'm not sure this can be helped. | unknown | |
d5923 | train | You need to have a point shape that allows both fill and colour.
library(ggplot2)
library(dplyr) # for the %>% pipe

cars %>%
ggplot() +
geom_point(
aes(x = speed, y = dist,
color= I(ifelse(dist >50, 'red', 'black')),
fill= I(ifelse(dist >50, 'pink', 'gray')),
),
shape = 21,
size = 4 # changing size so it's easy to visualise
)
To check point shapes that allow both fill and colour use help(points) and refer to the 'pch' values section | unknown | |
d5924 | train | The ApplyResources method uses reflection to find the properties which will be updated with the resource values:
property = value.GetType().GetProperty(name, bindingAttr);
Reflection is notoriously slow. Assign the resource values by hand to the properties (e.g. using ResourceManager.GetString(...)). This is tedious to code, but should improve the performance.
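A rough sketch of the hand-written approach (the form and control names here are assumptions, not your actual code):
var resources = new System.ComponentModel.ComponentResourceManager(typeof(MainForm));
this.Text = resources.GetString("$this.Text");
okButton.Text = resources.GetString("okButton.Text");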
A: I would grab Reflector and take a look at the ApplyResources method to see what it actually does.
I would also recommend profiling using JetBrains dotTrace 4 (currently in EAP but trials can be downloaded), as it can also show times spent inside system classes. This makes it much more transparent where the time is actually spent. For instance, you can find out whether the time is spent looking up keys in a dictionary, accessing files, etc.
You could also do a micro benchmark and measure the time it takes to look up X keys in a Y-sized dictionary of strings, with X being the number of localized resources on a particular form and Y being the total resource pool. It will at least give you an idea of how fast you could look up the resources if you were to cache them in a dictionary, which may help you decide whether it is worthwhile to write your own resource provider. | unknown | |
d5925 | train | You claim you only found this in the server logs and didn't encounter it during debugging. That means that between these lines:
if (permissions.Count() > 0)
{
var p = permissions.First();
Some other process or thread changed your database, so that the query didn't match any documents anymore.
This is caused by permissions holding a lazily evaluated resource, meaning that the query is only executed when you iterate it (which Count() and First() do).
So in the Count(), the query is executed:
SELECT COUNT(*) ... WHERE ...
Which returns, at that moment, one row. Then the data is modified externally, causing the next query (at First()):
SELECT n1, n2, ... WHERE ...
To return zero rows, causing First() to throw.
Now, how to solve that is up to you, and depends entirely on how you want to model this scenario. It means the second query was actually correct: at that moment, there were no more rows that fulfilled the query criteria. You could materialize the query once:
permissions = query.Where(...).ToList()
But that would mean your logic operates on stale data. The same would happen if you'd use FirstOrDefault():
var permissionToApply = permissions.FirstOrDefault();
if (permissionToApply != null)
{
// rest of your logic
}
So it's basically a lose-lose scenario. There's always the chance that you're operating on stale data, which means that the next code:
tdbctx.UserPermissions.SingleOrDefault(tup => tup.UserID == p.UserID);
Would throw as well. So every time you query the database, you'll have to write the code in such a way that it can handle the records not being present anymore. | unknown | |
d5926 | train | I would suggest putting the components you would like to iterate one step deeper in the structure and also make sure every component has similar 'status' properties to check (which isn't the case in your json example) like so:
{
"host": {
"serial_number": "55555",
"status": "GREEN",
"name": "hostname",
"components": {
"raid_card": {
"serial_number": "55555",
"status": "GREEN",
"product_name": "PRODUCT"
},
"battery": {
"percent_charged": 100,
"health": "HEALTHY",
"status": "GREEN"
},
"accelerator": {
"temperature": 36,
"status": "GREEN"
},
"logical_drives": {
"serial_number": "55555555555",
"health": "HEALTHY",
"status": "GREEN"
}
}
}
}
When you have that in place, you can use code like this to check the status of each component:
# Set up a return variable (exit code)
[int]$exitCode = 0
# $machine is the variable name i used to import the json above
if ($machine.host.status -eq "GREEN") {
Write-Host "Message: Machine $($machine.host.name) - hardware is healthy"
}
else {
foreach ($component in $machine.host.components.PSObject.Properties) {
if ($component.Value.status -ne "GREEN") {
Write-Host "Message: Machine $($machine.host.name) - $($component.Name) is not healthy"
$exitCode = 1
}
}
}
Exit $exitCode
Of course you have different items in there, and for now I can see the array of logical drives may be a problem. If you want the script to tell you which logical drive is causing the status to be "not GREEN", you need to provide an if statement inside the foreach loop to iterate over every drive. | unknown | |
d5927 | train | The answer seems to be that you can provide boost::try_to_lock as a parameter to several of these scoped locks.
e.g.
boost::shared_mutex mutex;
// The reader version
boost::shared_lock<boost::shared_mutex> lock(mutex, boost::try_to_lock);
if (lock){
// We have obtained a shared lock
}
// Writer version
boost::upgrade_lock<boost::shared_mutex> write_lock(mutex, boost::try_to_lock);
if (write_lock){
boost::upgrade_to_unique_lock<boost::shared_mutex> unique_lock(write_lock);
// exclusive access now obtained.
}
EDIT:
I also found by experimentation that upgrade_to_unique_lock will fail if you don't have the upgrade lock. You can also do this:
boost::upgrade_to_unique_lock<boost::shared_mutex> unique_lock(write_lock);
if (unique_lock){
// we are the only thread in here... safe to do stuff to our shared resource
}
// If you need to downgrade then you can also call
unique_lock.release();
// And if you want to release the upgrade lock as well (since only one thread can have upgraded status at a time)
write_lock.unlock().
Note: You have to call release followed by unlock or you'll get a locking exception thrown.
You can of course just let unique_lock and write_lock go out of scope thereby releasing the locks, although I've found that sometimes you want to release it earlier and you should spend minimal time in that state. | unknown | |
d5928 | train | var el=document.getElementById('FOO');
el.innerHTML="<a href='whitehouse.gov'>"+el.textContent+"</a>";
Simply wrap it into a link. Note that HTML injection is possible. And do not worry about performance, we're talking about milliseconds...
If you want to prevent HTML injection, you may build it up manually:
var el=document.getElementById('FOO');
var a=document.createElement("a");
a.href="whitehouse.hov";
a.textContent=el.textContent;
el.innerHTML="";
el.appendChild(a);
A: You have two possibilities:
Add an <a> element as a child of the <span> element
Replace the text node ("Barack Obama") with an <a> element:
function addAnchor (wrapper, target) {
var a = document.createElement('a');
a.href = target;
a.textContent = wrapper.textContent;
wrapper.replaceChild(a, wrapper.firstChild);
}
addAnchor(document.getElementById('foo'), 'http://www.google.com');
<span id="foo">Barack Obama</span>
This will result in the following DOM structure:
<span id="foo">
<a href="http://www.google.com">Barack Obama</a>
</span>
Replace the <span> element with an <a> element
Replace the entire <span> element with an <a> element:
function addAnchor (wrapper, target) {
var a = document.createElement('a');
a.href = target;
a.textContent = wrapper.textContent;
wrapper.parentNode.replaceChild(a, wrapper);
}
addAnchor(document.getElementById('foo'), 'http://www.google.com');
<span id="foo">Barack Obama</span>
This will result in the following DOM structure:
<a href="http://www.google.com">Barack Obama</a>
Use methods like createElement, appendChild and replaceChild to add element instead of innerHTML.
A:
var changeIntoLink = function(element, href) {
// Create a link wrapper
var link = document.createElement('a');
link.href = href;
// Move all nodes from original element to a link
while (element.childNodes.length) {
var node = element.childNodes[0];
link.appendChild(node);
}
// Insert link into element
element.appendChild(link);
};
var potus = document.getElementById('potus');
changeIntoLink(potus, 'http://google.com/');
<span id="potus">Barack Obama</span> | unknown | |
d5929 | train | So-so, you have:
*
*Platform version is Netweaver 7 (2004s)
*SAP ERP release is 6.0 and it was issued in 2005.
Yes, ECC 6.0 was issued in 2005, and your installation date of Aug 29 2006 tells you nothing more than the installation date.
You have no Enhancement Packs, only 6th Support Pack.
*ABAP version is 7.0 without any EHP.
More on this can be found here.
A: The System Status screens have a little bit evolved since ABAP 7.0. Here they are for ABAP 7.52 SP 0 (SAP_ABA or SAP_BASIS), S/4HANA 1709 On Premise, SAP kernel 7.53 SP 2, HANA 2.0 --Netweaver "version" is meaningless, it's more a marketing name-- :
*
*Menu System > Status:
*Click button Details of Product Version:
*
*First tab "Installed Software Component Versions":
*Second tab "Installed Product Versions":
*Click button Other kernel information: | unknown | |
d5930 | train | Why don't you just do
context.Employees.Include(x => x.Employments)
.Where(x => x.Employments.Any(employment =>
employment.StartDate <= date &&
(employment.EndDate == null || employment.EndDate > date)));
Given that a person can be employed multiple times in the same company.... | unknown | |
d5931 | train | Simply take a parameter with a unique type:
template <class F>
void apply_f(vector<double>& vec, F f) {
transform(vec.begin(), vec.end(), vec.begin(), f);
}
Not only will it work, but you will also get much better performance, since the compiler knows the actual type being passed.
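A quick usage sketch of that template version (the lambda here is just an example):
std::vector<double> v{1.0, 2.0, 3.0};
double offset = 0.5;
apply_f(v, [offset](double x) { return x * x + offset; });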
A: Unfortunately, lambdas are not just pointers to functions (because they can have state, for instance). You can change your code to use a std::function<double(double)> instead of a double(*)(double), and this can capture a lambda (you may need to pass std::cref(f) instead of just f). | unknown | |
d5932 | train | Try this below option-
Sales for the Group =
var sales =
CALCULATE(
SUM(Financialcostcenter[amount]),
Financialcostcenter[partnercompany]= "BRE",
Financialcostcenter[2 digits]=71,
DATESYTD('Datas'[Date])
)
+
CALCULATE(
SUM(Financialcostcenter[amount]),
Financialcostcenter[partnercompany]= "GRM",
Financialcostcenter[2 digits]=71,
DATESYTD('Datas'[Date])
)
RETURN IF(sales = BLANK(),"-", -(sales)) | unknown | |
d5933 | train | If WebCacheAttribute is supported only in AspNetCompatibility mode, you may need to declare AspNetCompatibilityRequirementsMode = Required in the "AspNetCompatibilityRequirements" attribute and check the service configuration in Web.config to ensure it is enabled:
<system.serviceModel>
<serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
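On the service implementation itself, the attribute would look roughly like this (a sketch; the class and interface names are assumptions, and the attribute lives in System.ServiceModel.Activation):
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class MyService : IMyService
{
    // ...
}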
For more information, please visit:
http://msdn.microsoft.com/en-us/library/aa702682.aspx | unknown | |
d5934 | train | Solved it. I took out the SC argument from the callback function and makeDivsFromTracks(), and now all the players show up. Not sure exactly why this works--maybe it has to do with the SC object being defined in the SDK script reference, so it's globally available and doesn't need to be passed into functions?
Anyways, working code is:
<html>
<head>
<script src="http://connect.soundcloud.com/sdk.js"></script>
<script>
function makeDivsFromTracks(tracks)
{
var track;
var permUrl;
var newDiv;
for(var ctr=0;ctr<tracks.length;ctr++)
{
newDiv=document.createElement("div");
newDiv.id="track"+ctr;
track=tracks[ctr];
//newDiv.innerHTML=track.permalink_url;
SC.oEmbed(track.permalink_url,{color:"ff0066"},newDiv);
document.body.appendChild(newDiv);
}
}
</script>
</head>
<body>
<script>
SC.initialize({
client_id: 'MY_CLIENT_ID'
});
SC.get('/tracks',{duration:{from:180000,to:900000},tags:'hitech',downloadable:true},function
(tracks){makeDivsFromTracks(tracks);});
</script>
</body>
</html> | unknown | |
d5935 | train | Take a look at json_decode
The result of a json_decode is an associative array with the keys and values that were present in your javascript object.
If you don't know how to get the information after you've posted to a PHP script, take a look at the superglobal $_POST. If you're not familiar with that, however, I suggest buying a PHP book or reading through some tutorials :)
A: If you want the string index from the JSON, send it to a PHP page
var myObj = {
"username" : theUsername,
"name" : theName,
"surname" : theSurName,
"email" : theEmail,
"password" : thePass,
"confirmpass" : thePass2,
"dob" : theDate,
"gender" : theGender,
"age" : theAge
}
Then in the PHP Page:
extract($_POST);
Then you should see your variables $name, $surname, $email...
Source http://php.net/extract | unknown | |
d5936 | train | Found the problem: the button that calls the form has a ModalResult = mrClose! | unknown | |
d5937 | train | _build_map() doesn't exist anymore. The following code worked for me
import folium
from IPython.display import display
LDN_COORDINATES = (51.5074, 0.1278)
myMap = folium.Map(location=LDN_COORDINATES, zoom_start=12)
display(myMap)
A: Considering the above answers, another simple way is to use it in a Jupyter Notebook.
For example (in a Jupyter notebook):
import folium
london_location = [51.507351, -0.127758]
m = folium.Map(location=london_location, zoom_start=15)
m
and see the result when calling the 'm'.
A: Is there a reason you are using an outdated version of Folium?
This ipython notebook clarifies some of the differences between 1.2 and 2, and it explains how to put folium maps in iframes.
http://nbviewer.jupyter.org/github/bibmartin/folium/blob/issue288/examples/Popups.ipynb
And the code would look something like this (found in the notebook above, it adds a marker, but one could just take it out):
m = folium.Map([43,-100], zoom_start=4)
html="""
<h1> This is a big popup</h1><br>
With a few lines of code...
<p>
<code>
from numpy import *<br>
exp(-2*pi)
</code>
</p>
"""
iframe = folium.element.IFrame(html=html, width=500, height=300)
popup = folium.Popup(iframe, max_width=2650)
folium.Marker([30,-100], popup=popup).add_to(m)
m
The docs are up and running, too, http://folium.readthedocs.io/en/latest/
A: I've found this tutorial on Folium in iPython Notebooks quite helpful. The raw Folium instance that you've created isn't enough to get iPython to display the map- you need to do a bit more work to get some HTML that iPython can render.
To display in the iPython notebook, you need to generate the html with the myMap._build_map() method, and then wrap it in an iFrame with styling for iPython.
import folium
from IPython.display import HTML, display
LDN_COORDINATES = (51.5074, 0.1278)
myMap = folium.Map(location=LDN_COORDINATES, zoom_start=12)
myMap._build_map()
mapWidth, mapHeight = (400, 500) # width and height of the displayed iFrame, in pixels
srcdoc = myMap.HTML.replace('"', '&quot;')
embed = HTML('<iframe srcdoc="{}" '
'style="width: {}px; height: {}px; display:block; width: 50%; margin: 0 auto; '
'border: none"></iframe>'.format(srcdoc, mapWidth, mapHeight))
embed
Where by returning embed as the output of the iPython cell, iPython will automatically call display.display() on the returned iFrame. In this context, you should only need to call display() if you're rendering something else afterwards or using this in a loop or a function.
Also, note that using map as a variable name may might be confused with the .map() method of several classes.
A: You can also save the map as html and then open it with webbrowser.
import folium
import webbrowser
class Map:
def __init__(self, center, zoom_start):
self.center = center
self.zoom_start = zoom_start
def showMap(self):
#Create the map
my_map = folium.Map(location = self.center, zoom_start = self.zoom_start)
#Display the map
my_map.save("map.html")
webbrowser.open("map.html")
#Define coordinates of where we want to center our map
coords = [51.5074, 0.1278]
map = Map(center = coords, zoom_start = 13)
map.showMap()
A: There is no need to use iframes in 2022. To display the map, simply use the
{{ map | safe }} tag in html and _repr_html_() method in you view. It is also not necessary to save the map to the template
sample.py
@app.route('/')
def index():
start_coords = (46.9540700, 142.7360300)
folium_map = folium.Map(location=start_coords, zoom_start=14)
return folium_map._repr_html_()
template.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
{{ folium_map | safe }}
</body>
</html>
A: I had the same error and nothing worked for me.
Finally I found it:
print(dir(folium.Map))
See that the method save does not exist there; use one of the methods that dir() lists instead. | unknown | |
d5938 | train | I would make sure you're saving it at an adequate resolution. I'm willing to bet that "save for web" reduces the resolution to 72 dpi, which may not be enough for an Android handset. In Photoshop, try bumping the resolution of the final PNG to something like 300 dpi and see if that makes a difference. From there you can experiment with different resolutions to figure out the smallest value you can use and still have a crisp image. Alternatively, you could just look for the documented resolution requirements.
A: The apparent quality of your images may also depend on the type of device you are displaying the images on. For example, if your image is saved as 72px x 72px in your image editor, then displayed with a size defined using 72 scaled pixels (sp) in Android on a high pixel-density device, then the OS will stretch the image before display. As such, the pixel density of the display device can affect the apparent image quality.
You can provide different resolution images for different pixel densities by using the hdpi, mdpi and ldpi folders for drawables. See these links for more info:
*
*Screens support
*Icon design | unknown | |
d5939 | train | You can use pipe your result to sed:
some_command | sed 's/[[:blank:]]*(/ (/'
Word1 ( 1.22 )
Word2 ( -111.999 )
Word3 ( 123 )
Instead of grep you may consider using awk also:
awk '/Word/{sub(/[[:blank:]]*\(/, " (")} 1' file
A: Simply Pipe your result to tr command.
your_grep_command | tr -s ' '
tr -s ' ' : It will squeeze multiple spaces to one on each line.
Ex:
$ echo "Word1 ( 1.22 )" | tr -s ' '
Word1 ( 1.22 ) | unknown | |
d5940 | train | It's because you are passing the original list. You're updating values inside the adapter but passing the original list, not the updated one from the adapter. Write a method inside the adapter to return your updated list.
Inside Custom.java adapter:
public ArrayList<Items> getItems(){
ArrayList<Items> quantityArrayList = new ArrayList<>();
Items item;
for (int i = 0; i < itemsArrayList.size(); i++){
item = itemsArrayList.get(i);
if (item.getQuantity() > 0)
quantityArrayList.add(item);
}
return quantityArrayList;
}
And inside MainActivity onCreate() should look like this. After clicking show button you're going to get Items from Custom Adapter whose quantity >0.
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
list_item = (ListView) findViewById(R.id.listdetails);
searchview=(SearchView)findViewById(R.id.searchView);
show = (Button) findViewById(R.id.btnview);
itemsArrayList=new ArrayList<>();
itemsArrayList.add(new Items(1,"Book",20,0,0));
itemsArrayList.add(new Items(2,"Pen",25,0,0));
itemsArrayList.add(new Items(3,"Scale",10,0,0));
itemsArrayList.add(new Items(4,"Eraser",5,0,0));
Custom c = new Custom(this,itemsArrayList);
list_item.setAdapter(c);
list_item.setTextFilterEnabled(true);
setupSearchView();
show.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent intent = new Intent(MainActivity.this, Trial.class);
List<Items> quantityList = c.getItems();
intent.putExtra("data", quantityList);
startActivity(intent);
}
});
} | unknown | |
d5941 | train | :l = 3;
return 3;
}
};
A a;
int main(){
// A::l = 3;
a.foo();
return 0;
}
The above code gives errors when compiling; can someone help to resolve them? When I remove the reads and writes to the static thread_local member, it seems to compile. Does it need some special libraries or linker options to work properly? I need to keep static thread_local to get the same features as the ThreadLocal class in Java. | unknown | |
d5942 | train | Here's a solution that seems to work. I'm using lapply to create the tabs. Let me know if it works for what you need.
library(shiny)
ui <- pageWithSidebar(
headerPanel("xxx"),
sidebarPanel(),
mainPanel(
do.call(tabsetPanel, c(id='tab',lapply(1:5, function(i) {
tabPanel(
title=paste0('tab ', i),
textOutput(paste0('out',i))
)
})))
)
)
server <- function(input, output) {
lapply(1:5, function(j) {
output[[paste0('out',j)]] <- renderPrint({
paste0('generated out ', j)
})
})
}
shinyApp(ui, server) | unknown | |
d5943 | train | A host of possibilities.
Try adding break points at xmppStreamDidConnect and xmppStreamDidAuthenticate.
If xmppStreamDidConnect isn't reached, the connection is not established; you've to rectify your hostName.
If xmppStreamDidAuthenticate isn't reached, the user is not authenticated; you've to rectify your credentials i.e. username and/or password.
One common mistake is omitting the @domainname at the end of the username, i.e. username@domainname, e.g. keithoys@openfireserver where the domain name is openfireserver.
A: Hope this is still relevant; if not, hopefully it will help others.
There are some issues with your code:
*
*I don't see the call to connect, you should add something like this:
NSError *error = nil;
if (![_xmppStream connectWithTimeout:XMPPStreamTimeoutNone error:&error]) {
UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Error connecting"
message:@"Msg"
delegate:nil
cancelButtonTitle:@"Ok"
otherButtonTitles:nil];
[alertView show];
}
*Most of the XMPP API is asynchronous.
You have to set the stream delegate in order to receive events.
Check out XMPPStreamDelegate and XMPPStream#addDelegate
If you don't want to go through the code (XMPPStream.h) yourself, you can implement all the methods of XMPPStreamDelegate and log the events. This will help you understand how the framework works.
Hope this helps, Yaron | unknown | |
d5944 | train | This is not C related. It looks like C++, in which case the ~ is the destructor for the class. You might want to read about destructors in the C++ FAQ.
A: This is a C++ class, not an Objective-C class. The ~ symbol is used to declare or define a destructor method, a method that is automatically called when an instance's lifetime ends. The destructor method of a C++ class is used in the same way that a dealloc method is used in Objective-C classes (to clean up resources, and so on). The difference is that in Objective-C, dealloc is not invoked until it has no owners (i.e. all owners have relinquished their ownership by sending release).
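A minimal C++ illustration (the class name here is made up):
class AudioPlayer {
public:
    AudioPlayer();   // constructor
    ~AudioPlayer();  // destructor: runs automatically when the instance's lifetime ends, much like dealloc
};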
If you wish to know what this code does, perhaps ask a C++ crowd, although, from a quick glance, it looks like an audio player utilising Apple's AudioToolbox framework.
A: That looks like C++ to me. It is supported on iOS, so that's why you’re seeing it.
A: That is c++ code, to develop with these from Objective-C you'd use Objective-C++. Files with .mm are Obj-c++ and .m are just standard obj-c.
A: That's the destructor. In C++, destructors are written like this. While this code looks more like C++, in Objective-C you can write the destructor with "~" | unknown | |
d5945 | train | Updated answer.
After reading a few documents about the TARGA format, I've revised and simplified a C program to do the conversion.
// tga2img.c
#include <stdio.h>
#include <stdlib.h>
#include <wand/MagickWand.h>
typedef struct {
unsigned char idlength;
unsigned char colourmaptype;
unsigned char datatypecode;
short int colourmaporigin;
short int colourmaplength;
unsigned char colourmapdepth;
short int x_origin;
short int y_origin;
short int width;
short int height;
unsigned char bitsperpixel;
unsigned char imagedescriptor;
} HEADER;
typedef struct {
int extensionoffset;
int developeroffset;
char signature[16];
unsigned char p;
unsigned char n;
} FOOTER;
int main(int argc, const char * argv[]) {
HEADER tga_header;
FOOTER tga_footer;
FILE
* fd;
size_t
tga_data_size,
tga_pixel_size,
i,
j;
unsigned char
* tga_data,
* buffer;
const char
* input,
* output;
if (argc != 3) {
printf("Usage:\n\t %s <input> <output>\n", argv[0]);
return 1;
}
input = argv[1];
output = argv[2];
fd = fopen(input, "rb");
if (fd == NULL) {
fprintf(stderr, "Unable to read TGA input\n");
return 1;
}
/********\
* TARGA *
\*********/
#pragma mark TARGA
// Read TGA header
fread(&tga_header.idlength, sizeof(unsigned char), 1, fd);
fread(&tga_header.colourmaptype, sizeof(unsigned char), 1, fd);
fread(&tga_header.datatypecode, sizeof(unsigned char), 1, fd);
fread(&tga_header.colourmaporigin, sizeof( short int), 1, fd);
fread(&tga_header.colourmaplength, sizeof( short int), 1, fd);
fread(&tga_header.colourmapdepth, sizeof(unsigned char), 1, fd);
fread(&tga_header.x_origin, sizeof( short int), 1, fd);
fread(&tga_header.y_origin, sizeof( short int), 1, fd);
fread(&tga_header.width, sizeof( short int), 1, fd);
fread(&tga_header.height, sizeof( short int), 1, fd);
fread(&tga_header.bitsperpixel, sizeof(unsigned char), 1, fd);
fread(&tga_header.imagedescriptor, sizeof(unsigned char), 1, fd);
// Calculate sizes
tga_pixel_size = tga_header.bitsperpixel / 8;
tga_data_size = tga_header.width * tga_header.height * tga_pixel_size;
// Read image data
tga_data = malloc(tga_data_size);
fread(tga_data, 1, tga_data_size, fd);
// Read TGA footer.
fseek(fd, -26, SEEK_END);
fread(&tga_footer.extensionoffset, sizeof( int), 1, fd);
fread(&tga_footer.developeroffset, sizeof( int), 1, fd);
fread(&tga_footer.signature, sizeof( char), 16, fd);
fread(&tga_footer.p, sizeof(unsigned char), 1, fd);
fread(&tga_footer.n, sizeof(unsigned char), 1, fd);
fclose(fd);
buffer = malloc(tga_header.width * tga_header.height * 4);
#pragma mark RGBA4444 to RGBA8888
for (i = 0, j=0; i < tga_data_size; i+= tga_pixel_size) {
buffer[j++] = (tga_data[i+1] & 0x0f) << 4; // Red
buffer[j++] = tga_data[i ] & 0xf0; // Green
buffer[j++] = (tga_data[i ] & 0x0f) << 4; // Blue
buffer[j++] = tga_data[i+1] & 0xf0; // Alpha
}
free(tga_data);
/***************\
* IMAGEMAGICK *
\***************/
#pragma mark IMAGEMAGICK
MagickWandGenesis();
PixelWand * background;
background = NewPixelWand();
PixelSetColor(background, "none");
MagickWand * wand;
wand = NewMagickWand();
MagickNewImage(wand,
tga_header.width,
tga_header.height,
background);
background = DestroyPixelWand(background);
MagickImportImagePixels(wand,
0,
0,
tga_header.width,
tga_header.height,
"RGBA",
CharPixel,
buffer);
free(buffer);
MagickWriteImage(wand, argv[2]);
wand = DestroyMagickWand(wand);
return 0;
}
Which can be compiled with clang $(MagickWand-config --cflags --libs) -o tga2im tga2im.c, and can be executed simply by ./tga2im N_birthday_0000.tga N_birthday_0000.tga.png.
Original answer.
The only way I can think of converting the images is to author a quick program/script to do the bitwise color-pixel logic.
This answer offers a quick way to read the image data; so combining with MagickWand, can be converted easily. (Although I know there'll be better solutions found on old game-dev forums...)
#include <stdio.h>
#include <stdbool.h>
#include <wand/MagickWand.h>
typedef struct
{
unsigned char imageTypeCode;
short int imageWidth;
short int imageHeight;
unsigned char bitCount;
unsigned char *imageData;
} TGAFILE;
bool LoadTGAFile(const char *filename, TGAFILE *tgaFile);
int main(int argc, const char * argv[]) {
const char
* input,
* output;
if (argc != 3) {
printf("Usage:\n\t%s <input> <output>\n", argv[0]);
}
input = argv[1];
output = argv[2];
MagickWandGenesis();
TGAFILE header;
if (LoadTGAFile(input, &header) == true) {
// Build a blank canvas image matching TGA file.
MagickWand * wand;
wand = NewMagickWand();
PixelWand * background;
background = NewPixelWand();
PixelSetColor(background, "NONE");
MagickNewImage(wand, header.imageWidth, header.imageHeight, background);
background = DestroyPixelWand(background);
// Allocate RGBA8888 buffer
unsigned char * buffer = malloc(header.imageWidth * header.imageHeight * 4);
// Iterate over TGA image data, and convert RGBA4444 to RGBA8888;
size_t pixel_size = header.bitCount / 8;
size_t total_bytes = header.imageWidth * header.imageHeight * pixel_size;
for (int i = 0, j = 0; i < total_bytes; i+=pixel_size) {
// Red
buffer[j++] = (header.imageData[i ] & 0x0f) << 4;
// Green
buffer[j++] = (header.imageData[i ] & 0xf0);
// Blue
buffer[j++] = (header.imageData[i+1] & 0xf0) << 4;
// Alpha
buffer[j++] = (header.imageData[i+1] & 0xf0);
}
// Import image data over blank canvas
MagickImportImagePixels(wand, 0, 0, header.imageWidth, header.imageHeight, "RGBA", CharPixel, buffer);
// Write image
MagickWriteImage(wand, output);
wand = DestroyMagickWand(wand);
} else {
fprintf(stderr, "Could not read TGA file %s\n", input);
}
MagickWandTerminus();
return 0;
}
/*
* Method copied verbatim from https://stackoverflow.com/a/7050007/438117
* Show your love by +1 to Wroclai answer.
*/
bool LoadTGAFile(const char *filename, TGAFILE *tgaFile)
{
FILE *filePtr;
unsigned char ucharBad;
short int sintBad;
long imageSize;
int colorMode;
unsigned char colorSwap;
// Open the TGA file.
filePtr = fopen(filename, "rb");
if (filePtr == NULL)
{
return false;
}
// Read the two first bytes we don't need.
fread(&ucharBad, sizeof(unsigned char), 1, filePtr);
fread(&ucharBad, sizeof(unsigned char), 1, filePtr);
// Which type of image gets stored in imageTypeCode.
fread(&tgaFile->imageTypeCode, sizeof(unsigned char), 1, filePtr);
// For our purposes, the type code should be 2 (uncompressed RGB image)
// or 3 (uncompressed black-and-white images).
if (tgaFile->imageTypeCode != 2 && tgaFile->imageTypeCode != 3)
{
fclose(filePtr);
return false;
}
// Read 13 bytes of data we don't need.
fread(&sintBad, sizeof(short int), 1, filePtr);
fread(&sintBad, sizeof(short int), 1, filePtr);
fread(&ucharBad, sizeof(unsigned char), 1, filePtr);
fread(&sintBad, sizeof(short int), 1, filePtr);
fread(&sintBad, sizeof(short int), 1, filePtr);
// Read the image's width and height.
fread(&tgaFile->imageWidth, sizeof(short int), 1, filePtr);
fread(&tgaFile->imageHeight, sizeof(short int), 1, filePtr);
// Read the bit depth.
fread(&tgaFile->bitCount, sizeof(unsigned char), 1, filePtr);
// Read one byte of data we don't need.
fread(&ucharBad, sizeof(unsigned char), 1, filePtr);
// Color mode -> 3 = BGR, 4 = BGRA.
colorMode = tgaFile->bitCount / 8;
imageSize = tgaFile->imageWidth * tgaFile->imageHeight * colorMode;
// Allocate memory for the image data.
tgaFile->imageData = (unsigned char*)malloc(sizeof(unsigned char)*imageSize);
// Read the image data.
fread(tgaFile->imageData, sizeof(unsigned char), imageSize, filePtr);
// Change from BGR to RGB so OpenGL can read the image data.
for (int imageIdx = 0; imageIdx < imageSize; imageIdx += colorMode)
{
colorSwap = tgaFile->imageData[imageIdx];
tgaFile->imageData[imageIdx] = tgaFile->imageData[imageIdx + 2];
tgaFile->imageData[imageIdx + 2] = colorSwap;
}
fclose(filePtr);
return true;
}
The order of the color channels may need to be switch around.
A: Oh, I see Eric beat me to it:-)
Hey ho! I did it a different way anyway and got a different answer so you can see which one you like best. I also wrote some C but I didn't rely on any libraries, I just read the TGA and converted it to a PAM format and let ImageMagick make that into PNG afterwards at command-line.
I chose PAM because it is the simplest file to write which supports transparency - see Wikipedia on PAM format.
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
int main(int argc,char* argv[]){
unsigned char buf[64];
FILE* fp=fopen(argv[1],"rb");
if(fp==NULL){
fprintf(stderr,"ERROR: Unable to open %s\n",argv[1]);
exit(1);
}
// Read TGA header of 18 bytes, extract width and height
fread(buf,1,18,fp); // 12 bytes junk, 2 bytes width, 2 bytes height, 2 bytes junk
unsigned short w=buf[12]|(buf[13]<<8);
unsigned short h=buf[14]|(buf[15]<<8);
// Write PAM header
fprintf(stdout,"P7\n");
fprintf(stdout,"WIDTH %d\n",w);
fprintf(stdout,"HEIGHT %d\n",h);
fprintf(stdout,"DEPTH 4\n");
fprintf(stdout,"MAXVAL 255\n");
fprintf(stdout,"TUPLTYPE RGB_ALPHA\n");
fprintf(stdout,"ENDHDR\n");
// Read 2 bytes at a time RGBA4444
while(fread(buf,2,1,fp)==1){
unsigned char out[4];
out[0]=(buf[1]&0x0f)<<4;
out[1]=buf[0]&0xf0;
out[2]=(buf[0]&0x0f)<<4;
out[3]=buf[1]&0xf0;
// Write the 4 modified bytes out RGBA8888
fwrite(out,4,1,stdout);
}
fclose(fp);
return 0;
}
I the compile that with gcc:
gcc targa.c -o targa
Or you could use clang:
clang targa.c -o targa
and run it with
./targa someImage.tga > someImage.pam
and convert the PAM to PNG with ImageMagick at the command-line:
convert someImage.pam someImage.png
If you want to avoid writing the intermediate PAM file to disk, you can pipe it straight into convert like this:
./targa illu_evolution_01.tga | convert - result.png
You can, equally, make a BMP output file if you wish:
./targa illu_evolution_01.tga | convert - result.bmp
If you have thousands of files to do, and you are on a Mac or Linux, you can use GNU Parallel and get them all done in parallel much faster like this:
parallel --eta './targa {} | convert - {.}.png' ::: *.tga
If you have more than a couple of thousand files, you may get "Argument list too long" errors, in which case, use the slightly harder syntax:
find . -name \*tga -print0 | parallel -0 --eta './targa {} | convert - {.}.png'
On a Mac, you would install GNU Parallel with homebrew using:
brew install parallel
For your RGBA5650 images, I will fall back to PPM as my intermediate format because the alpha channel of PAM is no longer needed. The code will now look like this:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
int main(int argc,char* argv[]){
unsigned char buf[64];
FILE* fp=fopen(argv[1],"rb");
if(fp==NULL){
fprintf(stderr,"ERROR: Unable to open %s\n",argv[1]);
exit(1);
}
// Read TGA header of 18 bytes, extract width and height
fread(buf,1,18,fp); // 12 bytes junk, 2 bytes width, 2 bytes height, 2 bytes junk
unsigned short w=buf[12]|(buf[13]<<8);
unsigned short h=buf[14]|(buf[15]<<8);
// Write PPM header
fprintf(stdout,"P6\n");
fprintf(stdout,"%d %d\n",w,h);
fprintf(stdout,"255\n");
// Read 2 bytes at a time RGBA5650
while(fread(buf,2,1,fp)==1){
unsigned char out[3];
out[0]=buf[1]&0xf8;
out[1]=((buf[1]&7)<<5) | ((buf[0]>>3)&0x1c);
out[2]=(buf[0]&0x1f)<<3;
// Write the 3 modified bytes out RGB888
fwrite(out,3,1,stdout);
}
fclose(fp);
return 0;
}
And will compile and run exactly the same way.
A: I have been thinking about this some more and it ought to be possible to reconstruct the image without any special software - I can't quite see my mistake for the moment, but maybe @emcconville can cast an expert eye over it and point out my mistake! Pretty please?
So, my concept is that ImageMagick has read in the image size and pixel data correctly but has just allocated the bits according to the standard RGB5551 interpretation of a TARGA file rather than RGBA4444. So, we rebuild the 16-bits of data it read and split them differently.
The first line below does the rebuild into the original 16-bit data, then each subsequent line splits out one of the RGBA channels and then we recombine them:
convert illu_evolution_01.tga -depth 16 -channel R -fx "(((r*255)<<10) | ((g*255)<<5) | (b*255) | ((a*255)<<15))/255" \
\( -clone 0 -channel R -fx "((((r*255)>>12)&15)<<4)/255" \) \
\( -clone 0 -channel R -fx "((((r*255)>>8 )&15)<<4)/255" \) \
\( -clone 0 -channel R -fx "((((r*255) )&15)<<4)/255" \) \
-delete 0 -set colorspace RGB -combine -colorspace sRGB result.png
# The rest is just debug so you can see the reconstructed channels in [rgba].png
convert result.png -channel R -separate r.png
convert result.png -channel G -separate g.png
convert result.png -channel B -separate b.png
convert result.png -channel A -separate a.png
So, the following diagram represents the 16-bits of 1 pixel:
A R R R R R G G G G G B B B B B <--- what IM saw
R R R R G G G G B B B B A A A A <--- what it really meant
Yes, I have disregarded the alpha channel for the moment. | unknown | |
d5946 | train | If the order / items are static you can store the links as strings in an array and then access the array to get the corresponding string and navigate to it using an Intent.
Here is an example of an intent to a web address
String url = "http://www.youtube.com";
Intent i = new Intent(Intent.ACTION_VIEW);
i.setData(Uri.parse(url));
startActivity(i);
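Putting the two together, a rough sketch (the urls array and listView variable are placeholders; the array must line up with the adapter's item order):
final String[] urls = {
        "http://www.youtube.com",
        "http://www.example.com",
        "http://stackoverflow.com"
};
listView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
    @Override
    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
        // Look up the link that corresponds to the clicked row and open it.
        Intent i = new Intent(Intent.ACTION_VIEW);
        i.setData(Uri.parse(urls[position]));
        startActivity(i);
    }
});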
A: If I understand correctly, you should use a onItemClickListener!
http://developer.android.com/reference/android/widget/AdapterView.OnItemClickListener.html
Then use this code in it!
String url = "http://www.example.com";
Intent i = new Intent(Intent.ACTION_VIEW);
i.setData(Uri.parse(url));
startActivity(i); | unknown | |
d5947 | train | You need to add a unique key prop to your React element.
According to the React docs:
Keys help React identify which items have changed, are added, or are
removed. Keys should be given to the elements inside the array to give
the elements a stable identity.
The best way to pick a key is to use a string that uniquely identifies
a list item among its siblings. Most often you would use IDs from your
data as keys
When you don’t have stable IDs for rendered items, you may use the
item index as a key as a last resort
You can do it like
for (var fieldIn in fieldsIn) { // array of FORM ELEMENT descriptions in JSON
console.log(fieldIn);
let field = React.createElement(SmartRender, // go build the React Element
{key: fieldsIn[fieldIn].key, fieldIn},
null); // lowest level, no children, data is in props
console.log('doin fields inside');
fieldsOut.push(field);
}
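As a generic sketch of the quoted advice (the item names here are made up, not from your code): use a stable id from your data when you have one, and fall back to the array index only as a last resort.
const listItems = items.map((item, index) =>
    React.createElement('li',
        // stable id preferred; array index only as a last resort
        { key: item.id != null ? item.id : index },
        item.text)
);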
Why are keys necessary?
By default, when recursing on the children of a DOM node, React just iterates over both lists of children at the same time and generates a mutation whenever there’s a difference.
For example, when adding an element at the end of the children, converting between these two trees works well:
<ul>
<li>first</li>
<li>second</li>
</ul>
<ul>
<li>first</li>
<li>second</li>
<li>third</li>
</ul>
React will match the two <li>first</li> trees, match the two <li>second</li> trees, and then insert the <li>third</li> tree.
If you implement it naively, inserting an element at the beginning has worse performance. For example, converting between these two trees works poorly.
<ul>
<li>first</li>
<li>second</li>
</ul>
<ul>
<li>third</li>
<li>first</li>
<li>second</li>
</ul>
That is where keys come in handy. | unknown | |
d5948 | train | I know you would prefer not to calculate the midpoints by hand; however, it is often easier to work with variables inside the aesthetics than with statistics, so I did it by calculating the midpoints beforehand and mapping them to the axes.
library(ggplot2)
library(directlabels) # provides a geom_dl that works easier with labels
foo <- data.frame(x=runif(50),y=runif(50))
bar <- data.frame(x1=c(0.2,0),x2=c(0.7,0.2),
y1=c(0.1,0.9),y2=c(0.6,0.5),
midx = c(0.45, 0.1), # x mid points
midy = c(0.35, 0.7), # y midpoints
lbl=c("Arrow 1", "Arrow 2"))
p1 <- ggplot(data=foo,aes(x=x,y=y))
p1 <- p1 + geom_point(color="grey")
p1 <- p1 + geom_segment(data=bar,aes(x=x1, xend=x2, y=y1, yend=y2),
size = 0.75,arrow = arrow(length = unit(0.5, "cm")))
p1 + geom_dl(data = bar, aes(x = midx, y = midy, label = lbl),
method = list(dl.trans(x = unit(x, 'cm'), y = unit(y, 'cm'))))
A: Here are two ways to do it with a lot of tidying. You don't need to do anything by hand if you think about the fact that the midpoint has coordinates that are just the means of x values and the means of y values of the 2 endpoints. First way is to tidy your data frame, calculate the midpoints, then make it wide again to have x and y columns. That data frame goes into ggplot, so it's passed through all the geoms, but we override with data arguments to geom_point and geom_segment. geom_segment gets just the original copy of the bar data frame.
library(tidyverse)
foo <- data.frame(x=runif(50),y=runif(50))
bar <- data.frame(x1=c(0.2,0),x2=c(0.7,0.2),
y1=c(0.1,0.9),y2=c(0.6,0.5),
lbl=c("Arrow 1", "Arrow 2"))
bar %>%
gather(key = coord, value = value, -lbl) %>%
mutate(coord = str_sub(coord, 1, 1)) %>%
group_by(lbl, coord) %>%
summarise(value = mean(value)) %>%
ungroup() %>%
spread(key = coord, value = value) %>%
ggplot() +
geom_point(aes(x = x, y = y), data = foo, color = "grey") +
geom_segment(aes(x = x1, y = y1, xend = x2, yend = y2), data = bar, size = 0.75, arrow = arrow(length = unit(0.5, "cm"))) +
geom_text(aes(x = x, y = y, label = lbl))
But maybe you don't want to do all that piping at the beginning, or you have to do this several times, so you want a function to calculate the midpoints. For the second version, I wrote a function that does basically what was piped into ggplot in the first version. You supply it with the bare column name where your labels are kept, which is the column it will be grouped on. Then you can just use that in your geom_text.
## cool function!
tidy_midpt <- function(df, lbl_col) {
lbl_quo <- enquo(lbl_col)
df %>%
gather(key = coord, value = value, -!!lbl_quo) %>%
mutate(coord = str_sub(coord, 1, 1)) %>%
group_by(lbl, coord) %>%
summarise(value = mean(value)) %>%
ungroup() %>%
spread(key = coord, value = value)
}
ggplot(data = bar) +
geom_point(aes(x = x, y = y), data = foo, color = "grey") +
geom_segment(aes(x = x1, y = y1, xend = x2, yend = y2), size = 0.75, arrow = arrow(length = unit(0.5, "cm"))) +
geom_text(aes(x = x, y = y, label = lbl), data = . %>% tidy_midpt(lbl))
Created on 2018-05-03 by the reprex package (v0.2.0). | unknown | |
d5949 | train | subprocess.call(['sed', '-e', 's/\"absolute\/path\/to\/your\/lib\/\"\/var\/www\/twiki\/lib\/', '\/var\/www\/twiki\/lib\/LocalLib.cfg'])
looks absolutely creepy.
First thing: why did you escape the /s on the file name argument? That is only necessary in the s command.
Second thing: If I replace your separator character from / to e.g. #, I can omit all the unnecessary escaping.
I did both and then got
subprocess.call(['sed', '-e', 's#"absolute/path/to/your/lib/"/var/www/twiki/lib/', '/var/www/twiki/lib/LocalLib.cfg'])
and what do I see? There are no # (i.e., no unescaped /) in the command.
Try
's#"absolute/path/to/your/lib/"#/var/www/twiki/lib/#'
here, or if you insist on using /, do
's/"absolute\/path\/to\/your\/lib\/"/\/var\/www\/twiki\/lib\//'
^ ^
with /s added on the ^ marked places.
Edit: I changed the " positions to reflect that my misunderstanding has been cleared up. See the comments below.
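Putting the corrected expression back into the original call gives something like this (a sketch, using the same file path as in the question):
import subprocess

subprocess.call([
    'sed', '-e',
    's#"absolute/path/to/your/lib/"#/var/www/twiki/lib/#',
    '/var/www/twiki/lib/LocalLib.cfg',
])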
A: Another thing you can do is use a raw string notation, notice the "r" before the string in example below.
import subprocess
COMMAND = r"""
mysql -u root -h localhost -p --exec='use test; select 1, 2, 3 | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > sample.csv
"""
proc = subprocess.Popen(COMMAND, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
std_out, std_err = proc.communicate()
Though this works, I did this only because there were loads of these commands in the bash that I wanted to wrap it with python. I would prefer using mysql directly from python and using the csv module. | unknown | |
d5950 | train | One approach could be like
df1 = list(ABCC10 = c("TCGA_DD_A1EG", "TCGA_FV_A3R2", "TCGA_FV_A3I0", "TCGA_DD_A1EH", "TCGA_FV_A23B"),
ACBD6 = c("TCGA_DD_A1EH", "TCGA_DD_A3A8", "TCGA_ES_A2HT", "TCGA_DD_A1EG", "TCGA_DD_A1EB"))
df2 = data.frame(TCGA.BC.A10Q = c(2.540764, 1.112432),
TCGA.DD.A1EB = c(0.4372165, 0.4611697),
TCGA.DD.A1EG = c(2.193205, 1.274129),
TCGA.DD.A1EH = c(3.265756, 1.802985),
TCGA.DD.A1EI = c(0.6060301, -0.0475743),
TCGA.DD.A3A6 = c(2.927072, 1.071064),
TCGA.DD.A3A8 = c(0.6799514, 0.4336301),
TCGA.ES.A2HT = c(-0.08129554, 1.76935812),
TCGA.FV.A23B = c(2.2963764, 0.3644397),
TCGA.FV.A3I0 = c(3.196518, 1.392206),
TCGA.FV.A3R2 = c(0.8595943, 1.0282030),
row.names = c('ABCC10', 'ACBD6'))
for(i in 1:length(df1)){
for(j in 1:length(df1[[1]])){
df1[names(df1)[i]][[1]][j] = df2[names(df1)[i],gsub("_",".",df1[names(df1)[i]][[1]][j])]
}
}
Output is:
$ABCC10
[1] "2.193205" "0.8595943" "3.196518" "3.265756" "2.2963764"
$ACBD6
[1] "1.802985" "0.4336301" "1.76935812" "1.274129" "0.4611697"
Hope this helps!
A: Maybe the following will do it.
First, make up some data, a list and a data.frame.
df1 <- list(A = letters[1:3], B = letters[5:7])
df2 <- data.frame(a = rnorm(2), b = rnorm(2), c = rnorm(2),
e = rnorm(2), f = rnorm(2), g = rnorm(2))
row.names(df2) <- c('A', 'B')
Now the code.
for(i in seq_along(df1)){
x <- gsub("_", ".", df1[[i]])
inx <- match(x, names(df2))
df1[[i]] <- df2[i, inx]
}
df1
In my tests it did what you want. If it doesn't fit your real problem, just say so. | unknown | |
d5951 | train | You have to handle the DBNull case explicitly, for example:
<%= DBNull.Value.Equals(DBRSet["price"]) ? "null" : Math.Round((decimal)DBRSet["price"]).ToString() %>
This is unwieldy, so it makes sense to have a helper method something like this somewhere:
static class FormatDbValue {
public static string Money(object value)
{
        if (DBNull.Value.Equals(value)) {
            return "0";
        }
        return Math.Round((decimal)value).ToString();
}
}
Which would allow
<%= FormatDbValue.Money(DBRSet["price"]) %>
Of course finding and changing all such code to use the helper method would be... unpleasant. I would do it by searching throughout the project (maybe in smaller chunks of the project) for something indicative (maybe Math.Round?) and review it manually before replacing.
A: If you don't want to change the aspx's, but can easily change the definition of the DBRSet property, you can put a wrapper over the SqlDataReader there and implement your own indexer that first checks for null and only goes into the inner data reader when the value is not null.
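A rough sketch of such a wrapper (the class and member names are made up; adapt it to however DBRSet is exposed):
public class NullSafeReader
{
    private readonly System.Data.IDataRecord inner;

    public NullSafeReader(System.Data.IDataRecord inner)
    {
        this.inner = inner;
    }

    // Indexer that hides DBNull from the calling page.
    public object this[string name]
    {
        get
        {
            object value = inner[name];
            return System.Convert.IsDBNull(value) ? (object)0m : value;
        }
    }
}
Returning 0m for DBNull keeps the existing Math.Round((decimal)...) calls working; substitute whatever default makes sense for your pages. | unknown | |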
d5952 | train | I believe you are missing a # in the fillRadialGradientColorStops array
0f1114 --> #0f1114 | unknown | |
d5953 | train | Use this syntax to remove the original binding by the Datepicker:
$("#txtStartDate").unbind('change').change(function () {
// your code
}); | unknown | |
d5954 | train | What you want is the encoding where Unicode code point X is encoded to the same byte value X. For code points inside 0-255 you have this in the latin-1 encoding:
def double_decode(bstr):
return bstr.decode("utf-8").encode("latin-1").decode("utf-8")
A: ret.decode() tries implicitly to encode ret with the system encoding - in your case ascii.
If you explicitly encode the unicode string, you should be fine. There is a builtin encoding that does what you need:
>>> 'X\xc3\xbcY\xc3\x9f'.encode('raw_unicode_escape').decode('utf-8')
'XüYß'
Really, .encode('latin1') (or cp1252) would be OK, because that's what the server is almost certainly using. The raw_unicode_escape codec will just give you something recognizable at the end instead of raising an exception:
>>> '€\xe2\x82\xac'.encode('raw_unicode_escape').decode('utf8')
'\\u20ac€'
>>> '€\xe2\x82\xac'.encode('latin1').decode('utf8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'latin-1' codec can't encode character '\u20ac' in position 0: ordinal not in range(256)
In case you run into this sort of mixed data, you can use the codec again, to normalize everything:
>>> '€\xe2\x82\xac'.encode('raw_unicode_escape').decode('utf8')
'\\u20ac€'
>>> '\\u20ac€'.encode('raw_unicode_escape')
b'\\u20ac\\u20ac'
>>> '\\u20ac€'.encode('raw_unicode_escape').decode('raw_unicode_escape')
'€€'
A: Don't use this! Use @hop's solution.
My nasty hack: (cringe! but quietly. It's not my fault, it's the server developers' fault)
def double_decode_unicode(s, encoding='utf-8'):
return ''.join(chr(ord(c)) for c in s.decode(encoding)).decode(encoding)
Then,
>>> double_decode_unicode('X\xc3\x83\xc2\xbcY\xc3\x83\xc2\x9f')
u'X\xfcY\xdf'
>>> print _
XüYß
A: Here's a little script that might help you, doubledecode.py --
https://gist.github.com/1282752 | unknown | |
d5955 | train | The form is "generated" once you execute ->getForm(); so if you want to add any field before generating it, you should finish with ->getForm();
So your code should probably look like:
// add you "static" fields
$formBuilder = $app['form.factory']->createBuilder(FormType::class)
->add('name', TextType::class, array(
'constraints' => array(new Assert\NotBlank(), new Assert\Length(array('min' => 4,'max' => 64))),
'label' => 'Team Name',
'required' => 'required',
'attr' => array('class' => 'input-field', 'autocomplete' => 'off', 'value' => $team->data()->name),
'label_attr' => array('class' => 'label')
))
->add('players', CheckboxType::class, [
'constraints' => array(new Assert\NotBlank()),
'label' => $player->username,
'attr' => array('class' => 'input-field', 'value' => $player->username),
'label_attr' => array('class' => 'label')
])
->add('submit', SubmitType::class, [
'label' => 'Edit',
'attr' => array('class' => 'submit'),
]);
$user = new User();
$user->getList();
// then add your "dynamic" fields
foreach($user->data() as $player) {
$formBuilder->add('players', CheckboxType::class, [
'constraints' => array(new Assert\NotBlank()),
'label' => $player->username,
'attr' => array('class' => 'input-field', 'value' => $player->username),
'label_attr' => array('class' => 'label')
]);
}
// then generate your form
$form = $formBuilder->getForm(); | unknown | |
d5956 | train | Add a return to the __str__ method.
UPDATE:
I ran your updated code on my machine, and it works fine:
aj@localhost:~/so/python# cat date2.py
from datetime import date
class Year(date):
def __new__(cls, year):
return super(Year, cls).__new__(cls, year, 1, 1)
def __str__(self):
return self.strftime('%Y')
y=Year(2011)
print str(y)
aj@localhost:~/so/python# python date2.py
2011
A: If this is your complete code you are missing the return statement:
def __str__(self):
return self.strftime('%Y') | unknown | |
d5957 | train | this
String ba1=Base64.encodeToString(ba, f);
is very heavy. I recommend using a Base64OutputStream (http://developer.android.com/reference/android/util/Base64OutputStream.html) instead: write the encoded data to a file, then use an InputStream in the HttpEntity.
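A rough sketch of that approach (file and variable names are made up, exception handling omitted):
// Encode straight to a temp file instead of building a huge Base64 String in memory.
File encoded = new File(context.getCacheDir(), "upload.b64");
OutputStream out = new Base64OutputStream(new FileOutputStream(encoded), Base64.DEFAULT);
out.write(ba);   // 'ba' is the raw byte[] you were encoding
out.close();

// Hand the encoded file to the request as a stream.
HttpEntity entity = new InputStreamEntity(new FileInputStream(encoded), encoded.length());
That way the full Base64 string never has to exist in memory at once. | unknown | |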
d5958 | train | <?
$keys = array('m1' => 1, -500 => 1, 0 => 1, 1000 => 1, 'm2' => 1, 5000 => 1, );
ksort($keys, SORT_STRING);
foreach($keys as $k => $v){
echo $k . '<br />';
}
?>
Will return:
-500
0
1000
5000
m1
m2
Make sure to keep all the string keys lowercase if you want them in the right order too. This will put the strings after all integers.
Heres an example of the method: http://codepad.org/IBc3wnso
The only way I can think of to simply get your non int keys first, is to prefix them with --:
<?
$keys = array('--m2' => 1, -500 => 1, 0 => 1, 1000 => 1, '--m1' => 1, 5000 => 1, );
ksort($keys, SORT_STRING);
foreach($keys as $k => $v){
echo $k . "\n";
}
?>
Will return:
--m1
--m2
-500
0
1000
5000
Example: http://codepad.org/rwbrj3rJ
It's a bit of a hack though. There's probably a better way to accomplish that.
A: If you want single chars as array keys, try chr(0) and chr(255).
Wait a minute: if you keep changing the question it's difficult to reply.
You have -500 as a key: this is not a single char.
Then, use -PHP_INT_MAX for lower value and PHP_INT_MAX for upper value. | unknown | |
d5959 | train | I wouldn't manually rely on that mechanism per se, as you may want to get more metrics out of the cluster, for which purpose you have native JMX support; through the JMX protocol you can look at metrics in more detail.
Now obviously you have OpsCenter, which natively leverages this feature, but alternatively you can use a combination of a JMX listener with something like Grafana (just a thought) or whatever supports native compatibility.
In terms of low level methods, yes, you are on the money:
connector.provider.session.isClosed()
But you also have heartbeats that you can log and look at and so on. There's more detail here. | unknown | |
d5960 | train | InStr returns positional information. While it is difficult to find the first occurrence of an array member within the text (you would need to build and compare matches), you can find the first position of each name then find which came first.
For example (untested)
Sub CountOccurences_SpecificText_In_Folder()
Dim MailItem As Outlook.MailItem
Dim i As Long, x As Long, position As Long, First As Long
Dim AgentNames() As String
AgentNames = Split("Simons,Skinner,Mammedaty,Hunter,Sunmola,Rodriguez,Mitchell,Tanner,Taylor,Wilson,Williams,Groover,Tyree,Chapman,Luker", ",")
Dim AgentCount(LBound(AgentNames) To UBound(AgentNames)) As Long
For i = LBound(AgentCount) To UBound(AgentCount)
AgentCount(i) = 0
Next i
For Each MailItem In ActiveExplorer.Selection
x = 0
For i = LBound(AgentNames) To UBound(AgentNames)
position = InStr(MailItem.Body, AgentNames(i))
If x > 0 Then
            If position > 0 And position < x Then
x = position
First = i
End If
Else
If position > 0 Then
x = position
First = i
End If
End If
Next i
        If x > 0 Then AgentCount(First) = AgentCount(First) + 1
Next MailItem
For i = LBound(AgentNames) To UBound(AgentNames)
Debug.Print AgentNames(i) & " Count: " & AgentCount(i)
Next i
End Sub
A: The idea in the previous answer may be better implemented like this:
Option Explicit
Sub CountOccurences_SpecificText_SelectedItems()
Dim objItem As Object
Dim objMail As MailItem
Dim i As Long
Dim j As Long
Dim x As Long
Dim position As Long
Dim First As Long
Dim AgentNames() As String
AgentNames = Split("Simons,Skinner,Mammedaty,Hunter,Sunmola,Rodriguez,Mitchell,Tanner,Taylor,Wilson,Williams,Groover,Tyree,Chapman,Luker", ",")
ReDim AgentCount(LBound(AgentNames) To UBound(AgentNames)) As Long
For j = 1 To ActiveExplorer.Selection.Count
Set objItem = ActiveExplorer.Selection(j)
' Verify before attempting to return mailitem poroperties
If TypeOf objItem Is MailItem Then
Set objMail = objItem
Debug.Print
Debug.Print "objMail.Subject: " & objMail.Subject
x = Len(objMail.Body)
For i = LBound(AgentNames) To UBound(AgentNames)
Debug.Print
Debug.Print "AgentNames(i): " & AgentNames(i)
position = InStr(objMail.Body, AgentNames(i))
Debug.Print " position: " & position
If position > 0 Then
If position < x Then
x = position
First = i
End If
End If
Debug.Print "Lowest position: " & x
Debug.Print " Current first: " & AgentNames(First)
Next i
If x < Len(objMail.Body) Then
AgentCount(First) = AgentCount(First) + 1
Debug.Print
Debug.Print AgentNames(First) & " was found first"
Else
Debug.Print "No agent found."
End If
End If
Next
For i = LBound(AgentNames) To UBound(AgentNames)
Debug.Print AgentNames(i) & " Count: " & AgentCount(i)
Next i
End Sub | unknown | |
d5961 | train | Hash table operations are very efficient, and if you're getting a lot of errors due to duplicate adds you might be better off eliminating the error handling. If you sort the priorities in descending order then you can do this:
$userProfileHash[$_.samaccountname] = $group.profile
and eliminate the Try/Catch. Duplicate memberships will just get overwritten and the last entry written for each user will be the highest priority group profile they belong to.
Edit: the original command I posted used the += operator. That's not correct for this application, and I've corrected the code. | unknown | |
d5962 | train | You can use the setGraphic method to change the appearance of the Node inside your Button.
Here's a documentation with an example about how to do it: Using JavaFX UI Controls - Button.
You can then apply CSS to that custom Node of yours.
Example:
Button button = new Button();
Label label = new Label("Click Me!");
label.setStyle("-fx-effect: dropshadow( one-pass-box , black , 8 , 0.0 , 2 , 0 )");
button.setGraphic(label); | unknown | |
d5963 | train | Try this
$datas = $request->all();
$records = [];
foreach ($datas as $key => $value) {
$records[][$key] = $value;
}
DataAnak::insert($records);
A: Why are you trying this complex way? It's not even the Eloquent way to insert data into the database. You should do it like below:
foreach($request->nama_anak as $key => $value){
DataAnak::create([
'nama_anak' => $request->nama_anak[$key],
'gender_anak' => $request->gender_anak[$key],
'tmt' => $request->tmt[$key],
'baptis_anak' => $request->baptis_anak[$key],
'sidi_anak' => $request->sidi_anak[$key],
]);
}
no need to take the inputs into another variable and create, loop, insert separately.
A: So I think your fields are arrays; you need to loop over them like this and insert them all at once:
$datas = $request->all();
$rows = array();
foreach ($datas as $key => $data) {
foreach($data as $index => $value) {
if ($key == 'nama_anak') $rows[$index]['nama'] = $value;
if ($key == 'gender_anak') $rows[$index]['jenis_kelamin'] = $value;
if ($key == 'tmt') $rows[$index]['tempat_tgl_lahir'] = $value;
if ($key == 'baptis_anak') $rows[$index]['tgl_baptis'] = $value;
if ($key == 'tgl_sidi') $rows[$index]['sidi_anak'] = $value;
}
}
DataAnak::insert($rows);
PS: If you have multiple records, don't insert/create them in loop. It will decrease the performance | unknown | |
d5964 | train | Use the class function:
Models <- Filter( function(x) 'lm' %in% class( get(x) ), ls() )
lapply( Models, function(x) plot( get(x) ) )
(Modified slightly to handle situations where objects can have multiple classes, as pointed out by @Gabor in the comments).
Update. For completeness, here is a refinement suggested by @Gabor's comment below. Sometimes we may want to only get objects that are of class X but not class Y. Or perhaps some other combination. For this one could write a ClassFilter() function that contains all of the class filterling logic, such as:
ClassFilter <- function(x) inherits(get(x), 'lm' ) & !inherits(get(x), 'glm' )
Then you get the objects that you want:
Objs <- Filter( ClassFilter, ls() )
Now you can process the Objs whatever way you want.
A: You can use Filter with inherits and ls in mget to get a named list of in this case of lm objects.
L <- Filter(\(x) inherits(x, "lm"), mget(ls()))
#L <- Filter(\(x) inherits(x, "lm") & !inherits(x, "glm"), mget(ls())) #In case glm need to be excluded
identical(unname(L), outList)
#[1] TRUE
lapply(L, plot) | unknown | |
d5965 | train | I've seen this a few times. It generally happens when there's a context switch to another thread. So you might be stepping through thread with ID 11, you hit F10, and there's a pre-emptive context switch so now you're running on thread ID 12 and so Visual Studio merrily allows the code to continue.
There are some good debugging tips here:
Tip: Break only when a specific thread calls a method: To set a per-thread breakpoint, you need to uniquely identify a particular thread that you have given a name with its Name property. You can set a conditional breakpoint for a thread by creating a conditional expression such as "ThreadToStopOn" == Thread.CurrentThread.Name .
You can manually change the name of a thread in the Watch window by watching variable "myThread" and entering a Name value for it in the value window. If you don't have a current thread variable to work with, you can use Thread.CurrentThread.Name to set the current thread's name. There is also a private integer variable in the Thread class, DONT_USE_InternalThread, this is unique to each thread. You can use the Threads window to get to the thread you want to stop on, and in the Watch window, enter Thread.CurrentThread.DONT_USE_InternalThread to see the value of it so you can create the right conditional breakpoint expression.
EDIT: There are also some good tips here. I found this by googling for 'visual studio prevent thread switch while debugging'.
A: You ought to take a look at this KB article and consider its matching hotfix.
EDIT: the hotfix does solve these kind of debugging problems. Unfortunately, the source code changes for the hotfix didn't make it back into the main branch and VS2010 shipped with the exact same problems. This got corrected again by its Service Pack 1.
A: I find using a logfile is very handy when dealing with multiple threads.
Debugging threads is like the Heisenberg principle - observe too closely and you'll change the outcome!
A: Try this http://support.microsoft.com/kb/957912. Worked for me. | unknown | |
d5966 | train | Assuming that your UART is configured correctly, you should see messages once preloader_console_init has been run. Prior to that, you can (depending on your platform) see about getting DEBUG_UART to function in your environment. | unknown | |
d5967 | train | This is because the AddFarm component is not mounted when you go to the path /anadir-granja; the reason is that you forgot to put a / before anadir-granja in the path prop of the Route component. It should be like this:
<Route exact path="/anadir-granja" element={<AddFarm/>}/> | unknown | |
d5968 | train | Answering own question.
Steps to create GL_TEXTURE_EXTERNAL_OES texture from RGB buffer on QNX.
1.Converting RGB to YUV422 format on CPU
2.Creating pixmap buffer using screen
EGLNativePixmapType pObjEglPixmap = ...
3.Binding pixmap to GL_TEXTURE_EXTERNAL_OES texture using EGLImageKHR object
EGLImageKHR pObjTextureEglImage = eglCreateImageKHR(eglDisplay,
EGL_NO_CONTEXT,
EGL_NATIVE_PIXMAP_KHR,
pObjEglPixmap,
NULL);
GLuint pObjTextureId;
glGenTextures(1, &pObjTextureId);
glBindTexture(GL_TEXTURE_EXTERNAL_OES, pObjTextureId);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES,
(GLeglImageOES)pObjTextureEglImage); | unknown | |
d5969 | train | Try this
FB.login(function(response) {
if (response.authResponse) {
FB.api('/me', function(response) {
id= response.id;
if(id==undefined)
{
alert('I am logged out');
}
else
{
alert('I am logged in');
}
})
}
});
A: First, you'll want to use response.status instead of response.session. And since the status is going to be string either way ("connected", "not_authorized", or "unknown"), you'll want to set up your if statements accordingly. The following example is from https://developers.facebook.com/docs/reference/javascript/FB.getLoginStatus/
FB.getLoginStatus(function(response) {
if (response.status === 'connected') {
// the user is logged in and has authenticated your
// app, and response.authResponse supplies
// the user's ID, a valid access token, a signed
// request, and the time the access token
// and signed request each expire
var uid = response.authResponse.userID;
var accessToken = response.authResponse.accessToken;
} else if (response.status === 'not_authorized') {
// the user is logged in to Facebook,
// but has not authenticated your app
} else {
// the user isn't logged in to Facebook.
}
});
Let me know if that makes sense, or if you have any questions :)
Update
Your script is still missing the little closure that loads the facebook sdk into your document (which is why you're getting the error that FB is not defined). Try replacing your whole script block with the following:
<script>
window.fbAsyncInit = function() {
FB.init({
appId : 1343434343, // Your FB app ID from www.facebook.com/developers/
status : true, // check login status
cookie : true, // enable cookies to allow the server to access the session
xfbml : true // parse XFBML
});
FB.getLoginStatus(function(response) {
if (response.status === 'connected') {
alert("yes");
var uid = response.authResponse.userID;
var accessToken = response.authResponse.accessToken;
} else if (response.status === 'not_authorized') {
} else {
// the user isn't logged in to Facebook.
}
});
};
// Load the SDK asynchronously
(function(d, s, id){
var js, fjs = d.getElementsByTagName(s)[0];
if (d.getElementById(id)) {return;}
js = d.createElement(s); js.id = id;
js.src = "//connect.facebook.net/en_US/all.js";
fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));
</script>
(^That last part is what loads the facebook sdk.) Try it out and let me know how it goes :) | unknown | |
d5970 | train | Use
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
TeX: {
noErrors: {disabled: true}
}
});
</script>
just before the script that loads MathJax.js itself. That will display the error messages instead of the original TeX code. | unknown | |
d5971 | train | To insert a <script> in an admin page the simplest thing to do is:
class ScribPartAdmin(admin.ModelAdmin):
...
your normal stuff...
...
class Media:
js = ('/path/to/your/file.js',)
ModelAdmin media definitions documentation
Now to add the class attribute to the textarea I think the simplest way to do it is like this:
from django import forms
class ScribPartAdmin(admin.ModelAdmin):
...
your normal stuff...
...
class Meta:
widgets = {'text': forms.Textarea(attrs={'class': 'mymarkup'})}
Overriding the default widgets documentation
I should add that this approach is good for a one shot use. If you want to reuse your field or JS many times, there's better ways to do it (custom widget for the field, with JS file specified if the JS is exclusively related to the field, extending template to include a JS file at many places).
A: You have to create a template, put it in templates/admin/change_form_scribpart.html with this content:
{% extends "admin/change_form.html" %}
{% load i18n %}
{% block content %}
<script type="text/javascript" src="/static/js/mymarkup.js"></script>
{{ block.super }}
{% endblock %}
Also, don't forget to activate this new admin template in your ScribPart ModelAdmin:
class ScribPartAdmin(admin.ModelAdmin):
ordering = ...
fieldsets = ...
change_form_template = "admin/change_form_scribpart.html"
A: You can send your form with json pack and get(check) with this code
results = ScribPart.all()
for r in results :
if r.test == id_text:
self.response.out.write("<script type='text/javascript' src='/static/js/"+r.name+"mymarkup.js'></script>") | unknown | |
d5972 | train | You are correct, the line you quoted for C++ effectively establishes that all threads in a C++ program see the same address space. One of the cornerstones of the C++ object model is that every living object has a unique address [intro.object]/9. Based on [intro.multithread]/1, you can pass a pointer or reference to an object created in one thread's automatic or thread-local storage to another thread and access the object from that second thread as long as the object is guaranteed to exist and there are no data races…
Interestingly, the C standard doesn't appear to explicitly give similar guarantees. However, the fact that different objects have different addresses and the address of an object is the same from the perspective of each thread in the program would still seem to be an implicit, necessary consequence of the rules of the language. C18 specifies that the address of a live object doesn't change [6.2.4/2], any object pointer can be compared to a pointer to void [6.5.9/2], and two pointers compare equal if and only if they point to the same object [6.5.9/6]. Storage class is not part of the type of a pointer. Thus, a pointer pointing to an object in the automatic storage of one thread must compare unequal to a pointer to some other object in the automatic storage of another thread, as well as to a pointer pointing to some object with different storage duration. And any two pointers pointing to the same object in automatic storage of some thread must compare equal no matter which thread got these pointers from where in what way. Thus, it can't really be that the value of a pointer means something different in different threads. Even if it may be implementation-defined whether a given thread can actually access an object in automatic storage of another thread via a pointer, I can, e.g., make a global void*, assign to it a pointer to an object of automatic storage from one thread, and, given the necessary synchronization, have another thread observe this pointer and compare it to some other pointer. The standard guarantees me that the comparison can only be true if I compare it to another pointer that points to the same object, i.e., the same object in automatic storage of the other thread, and that it must be true in this case…
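A tiny pthread sketch of that thought experiment (error handling omitted; the pointer comparison is the part the language guarantees, while actually dereferencing another thread's automatic object is where implementations could, in principle, differ):
#include <pthread.h>
#include <stdio.h>

static void *shared;                      /* written by main before the thread starts */

static void *worker(void *arg)
{
    /* Both pointers refer to main's automatic variable 'x', obtained by
       different routes, so they must compare equal. */
    printf("same object: %s\n", arg == shared ? "yes" : "no");
    printf("value seen from the other thread: %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    int x = 42;                           /* automatic storage of the main thread */
    pthread_t t;

    shared = &x;                          /* publish the address through a global */
    pthread_create(&t, NULL, worker, &x); /* thread creation orders the write above */
    pthread_join(t, NULL);                /* keeps 'x' alive for the worker's whole run */
    return 0;
}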
I cannot give you the exact rationale behind the decision to leave it implementation-defined whether one thread can access objects in automatic storage of another thread. But one can imagine a hypothetical platform where, e.g., only the thread a stack was allocated for is given access to the pages of that stack, e.g., for security reasons. I don't know of any actual platform where this would be the case. However, an OS could easily do this, even on x86. C is already based on some, arguably, quite strong assumptions concerning the address model. I think it's a good guess that the C standards committee was simply trying to avoid adding any more restrictions on top of that… | unknown | |
d5973 | train | Your Win7 image is anti-aliased.
This is good, not bad; it makes the text smoother.
It's controlled by properties in the Graphics class. | unknown | |
d5974 | train | It should work. By default, if you don't specify a scope in your directive it uses the parent scope, so property1 and property2 should be set. Try setting the scope in your directive to false. As a side note, what you are doing is not good practice; it would be better to isolate the scope and pass the properties as attributes. This way you get good encapsulation.
for example
angular.module('app').directive('googlePlace', function () {
return {
restrict: 'A',
require: 'ngModel',
scope: {
property1: '=',
property2: '='
        },
        link: function ($scope, element, attributes, model) {
            // here you have access to property1 and property2
        }
    };
});
function MyCtrl($scope) {
$scope.property1 = null;
$scope.property2 = null;
$scope.doSave = function(){
// do some logic
console.log($scope.property1);
console.log($scope.property2);
}
}
And your html
<div ng-controller="MyCtrl">
<div google-place property1='property1' property2='property2'></div>
</div>
A: I don't know what you are doing wrong, because it seems to work: http://jsfiddle.net/HB7LU/2865/
var myApp = angular.module('myApp',[]);
angular.module('myApp').directive('googlePlace', function () {
return {
restrict: 'A',
require: 'ngModel',
link: function ($scope, element, attributes, model) {
$scope.property1 = 'some val';
$scope.property2 = 'another val';
$scope.$apply();
}
}
});
angular.module('myApp').controller('MyCtrl', MyCtrl);
//myApp.directive('myDirective', function() {});
//myApp.factory('myService', function() {});
function MyCtrl($scope) {
$scope.doSave = function(){
// do some logic
console.log($scope.property1);
console.log($scope.property2);
}
} | unknown | |
d5975 | train | As described in this part of the documentation, you have to use @JSImport in your facade definition:
@JSImport("esprima", JSImport.Namespace)
For reference, @JSName defines a facade bound to a global name, while @JSImport defines a facade bound to a required JavaScript module. | unknown | |
d5976 | train | I don't think you can send a response from within uncaughtException, since that event can fire even when there is no request occurring.
Express itself provides a way to handle errors within routes, like so:
app.error(function(err, req, res, next){
//check error information and respond accordingly
});
A: Per ExpressJS Error Handling, add app.use(function(err, req, res, next){ // your logic }); below your other app.use statements.
Example:
app.use(function(err, req, res, next){
console.log(err.stack);
// additional logic, like emailing OPS staff w/ stack trace
}); | unknown | |
d5977 | train | I think this is the same problem:
ECONNREFUSED during 'next build'. Works fine with 'next dev'
The following is working:
import {getProviders, useSession} from 'next-auth/client'
import {useState, useEffect} from 'react'
import Layout from "../components/layout";
export default function Page() {
const [session, loading] = useSession()
const [providers, setProviders] = useState({});
useEffect(() => {
(async () => {
const res = await getProviders();
setProviders(res);
})();
}, []);
return (
<Layout providers={providers}>Page index</Layout>
)
} | unknown | |
d5978 | train | You can also use eval() to evaluate the function that you get from the subs() function:
f=sin(x);
a=eval(subs(f,1));
disp(a);
a =
0.8415
A: syms x
f = sin(x) ;
then if you want to assign a value to x , e.g. pi/2 you can do the following:
subs(f,x,pi/2)
ans =
1
A: You can evaluate functions efficiently by using matlabFunction.
syms s t
x =[ 2 - 5*t - 2*s, 9*s + 12*t - 5, 7*s + 2*t - 1];
x=matlabFunction(x);
then you can type x in the command window and make sure that the following appears:
x
x =
@(s,t)[s.*-2.0-t.*5.0+2.0,s.*9.0+t.*1.2e1-5.0,s.*7.0+t.*2.0-1.0]
You can see that your function is now defined in terms of s and t. You can call this function by writing x(1,2), where s=1 and t=2. It should generate a value for you.
Here are some things to consider: I don't know which is more accurate between this method and subs. The precision of different methods can vary. I don't know which would run faster if you were trying to generate enormous matrices. If you are not doing serious research or coding for speed then these things probably do not matter. | unknown | |
d5979 | train | It is impossible; you need at least 3 points to unambiguously define a circle.
A: Since you only have 2 points, randomly choose a third one, then calculate the circle's center point (the circumcenter of the three points).
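A rough sketch of that calculation, using the standard circumcenter formula for the two given points plus a random, non-collinear third one:
import random

def circle_through(p1, p2):
    # Pick a random third point that is not collinear with p1 and p2.
    while True:
        p3 = (random.uniform(-10, 10), random.uniform(-10, 10))
        d = 2 * (p1[0] * (p2[1] - p3[1]) + p2[0] * (p3[1] - p1[1]) + p3[0] * (p1[1] - p2[1]))
        if abs(d) > 1e-9:
            break
    # Circumcenter of p1, p2, p3.
    ux = ((p1[0]**2 + p1[1]**2) * (p2[1] - p3[1]) +
          (p2[0]**2 + p2[1]**2) * (p3[1] - p1[1]) +
          (p3[0]**2 + p3[1]**2) * (p1[1] - p2[1])) / d
    uy = ((p1[0]**2 + p1[1]**2) * (p3[0] - p2[0]) +
          (p2[0]**2 + p2[1]**2) * (p1[0] - p3[0]) +
          (p3[0]**2 + p3[1]**2) * (p2[0] - p1[0])) / d
    r = ((ux - p1[0])**2 + (uy - p1[1])**2) ** 0.5
    return (ux, uy), r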
This solution meets the criteria of the circle going through the original 2 points. | unknown | |
d5980 | train | One way to approach this is to split the logic out. First get the data into a list of X-Y pairs, then chunk the data into rows of 8 X-Y pairs, and then save the data (i.e. write the data to another text file).
The chunk method I've borrowed from another stack overflow answer.
def chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i:i + n]
input = [
'0,038043, 0,74061',
'0,038045, 0,73962',
'0,038047, 0,73865',
'0,038048, 0,73768',
'0,03805, 0,73672',
'0,038052, 0,73577',
'0,038053, 0,73482',
'0,038055, 0,73388',
'0,038057, 0,73295',
'0,038058, 0,73203',
'0,03806, 0,73112',
'0,038062, 0,73021',
'0,038064, 0,72931',
'0,038065, 0,72842',
'0,038067, 0,72754',
'0,038069, 0,7266'
] # Convert data to list of X-Y
for x in list(chunks(input, 8)): # 8 is the number of chunk
print(x) # This contains an array of 8 X-Y (e.g ['0,038043, 0,74061', '0,038045, 0,73962', '0,038047, 0,73865', '0,038048, 0,73768', '0,03805, 0,73672', '0,038052, 0,73577', '0,038053, 0,73482', '0,038055, 0,73388'])
... you could add your logic to save data to csv.
A: If you have a generic batcher that will work on an iterable, you can use it directly on the file object to read lists of up to 4 lines. For example:
def batcher(iterable, n):
while True:
vals = []
for i in range(n):
try:
vals.append(next(iterable))
except StopIteration:
if vals:
yield vals
return
yield vals
with open("input.txt") as f1:
with open("output.txt", "w") as f2:
for lines in batcher(f1, 4):
f2.write(' '.join((l.replace("\n", "") for l in lines)) + '\n')
A: import itertools
#i = 0
j = 0
for k in range (1,14):
print 'k =',k
with open("deflag_G{k}.inp".format(k=k)) as f1:
with open("deflag_G_{k}.inp".format(k=k),"w") as f2:
#lines = f1.readlines()
for i, line in enumerate(f1):
print i
j = j + 1
print j
b = line.split()[0]
s = float(b)
#print "b:", b
d = line.split()[1]
e = float(d)
if j == 8:
j = 0
# ('%s'%s + ' , ' + '%e'%e + '\n')
f2.write(line.rstrip('\n') + ","+ '\n')
else:
print(repr(line))
#if line.startswith(searchquery):
f2.write(line.rstrip('\n') + ", " )
#f2.write('%s'%listc + "\n")
# i = i + 1
#else :
# i = i+1
#os.close(f1)
f1.close()
f2.close() | unknown | |
d5981 | train | From your code I assume you are using a typed DataSet with the designer.
Not having a primary key is one of the many reasons the designer will not generate Insert, Update or Delete commands. This is a limitation of the CommandBuilder.
You could use the properties window to add an Update Command to the Apdapter but I would advice against that, if you later configure your main query again it will happily throw away all your work. The same argument holds against modifying the code in any *.Designer.cs file.
Instead, doubleclick on the caption with the Adaptername. The designer will create (if necessary) the accompanying non-designer source file and put the outside of a partial class in it. Unfortunately that is how far the code-generation of the designer goes in C#, you'll have to take it from there. (Aside: The VB designer knows a few more tricks).
Edit:
You will have to provide your own Update(...) method and setup an UpdateCommand etc.
var updateCommand = new SqlCommand();
... | unknown | |
d5982 | train | YouTube Data API Errors -> Global Domain Errors
dailyLimitExceeded402: A daily budget limit set by the developer has been reached.
Billing status: this API is limited by the free quota shown below (you can apply for a higher quota).
Quota summary (the daily quota resets at midnight Pacific Time (PT)):
Free quota: 50,000,000 units/day
Remaining: 50,000,000 units/day (100% of total)
Per-user limit: 3,000 requests/second/user
The current quota displayed in the Google Developer console is an estimate; it is not 100% accurate. If you are getting the error dailyLimitExceeded, it means that you have reached your limit for the day and will have to wait until midnight PT to run again. This is something you can test by running the request again later and seeing that you suddenly have access again.
You need to either extend your quota or reduce the number of requests you make. | unknown | |
d5983 | train | For many reasons, including:
*
*There is no guarantee that FreshJuice will be a concrete class; it can be an interface or an abstract class instead.
*You might not have a default constructor available.
*You might not have any constructor available at all.
A: Because you need to create an object before initializing it.
When you call new FreshJuice(); it first allocates memory for the object on the heap and then initializes it (with default values in this case, as provided by the corresponding default constructor). | unknown | |
d5984 | train | Use chrome.browser.openTab({ url: "" }, callback) with the "browser" permission.
https://developer.chrome.com/apps/browser#method-openTab | unknown | |
d5985 | train | You could simply extend your User#follow method to something like this:
# Follows a user.
def follow(other_user)
active_relationships.create(followed_id: other_user.id)
UserMailer.new_follower(other_user).deliver_now
end
Then add a new_follower(user) method to your UserMailer in a similar way to the already existing password_reset method.
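Something along these lines (a sketch; the subject text and the mailer view are up to you):
# app/mailers/user_mailer.rb
class UserMailer < ApplicationMailer
  def new_follower(user)
    @user = user
    mail to: user.email, subject: "You have a new follower"
  end
end
You will also need a matching app/views/user_mailer/new_follower template, just like the one that exists for password_reset. | unknown | |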
d5986 | train | JPA does not allow you to reattach detached objects.
The JPA specification defines the merge() operation. The operation seems to be useful to implement the described use case.
Please refer to the specification:
3.2.7.1 Merging Detached Entity State
The merge operation allows for the propagation of state from detached entities onto persistent entities managed by the entity manager. The semantics of the merge operation applied to an entity X are as follows:
*
*If X is a detached entity, the state of X is copied onto a pre-existing managed entity instance X' of the same identity or a new managed copy X' of X is created.
*If X is a new entity instance, a new managed entity instance X' is created and the state of X is copied into the new managed entity instance X'.
*If X is a removed entity instance, an IllegalArgumentException will be thrown by the merge operation (or the transaction commit will fail).
*If X is a managed entity, it is ignored by the merge operation, however, the merge operation is cascaded to entities referenced by relationships from X if these relationships have been annotated with the cascade element value cascade=MERGE or cascade=ALL annotation.
*For all entities Y referenced by relationships from X having the cascade element value cascade=MERGE or cascade=ALL, Y is merged recursively as Y'. For all such Y referenced by X, X' is set to reference Y'. (Note that if X is managed then X is the same object as X'.)
*If X is an entity merged to X', with a reference to another entity Y, where cascade=MERGE or cascade=ALL is not specified, then navigation of the same association from X' yields a reference to a managed object Y' with the same persistent identity as Y.
The persistence provider must not merge fields marked LAZY that have not been fetched: it must ignore such fields when merging.
Any Version columns used by the entity must be checked by the persistence runtime implementation during the merge operation and/or at flush or commit time. In the absence of Version columns there is no additional version checking done by the persistence provider runtime during the merge operation.
— JSR 338: JavaTM Persistence API, Version 2.1, Final Release.
A: I guess you need JPA merge together with optimistic locks (Version based field in your entity). If the entity was changed you won't be able to save it back.
So detach it and merge back (including the version).
There is still business logic question what to do if object is changed, retry with the updated values or send an error to end user but final decision is not technology issue/ | unknown | |
d5987 | train | IIUC, Let's try Series.str.replace:
df['final'] = df['OutputValues'].str.replace(r'\d+-\d+-', '')
OutputValues CntOutputValues final
0 12-99-Annual (AE) 217 Annual (AE)
1 21-581-Ineligible Services(IPS) 210 Ineligible Services(IPS)
2 125-99-Annual (AE),126-22-Jermaine (JE) 196 Annual (AE),Jermaine (JE)
3 22-99-Annual (AE) 181 Annual (AE)
4 21-50-Prime (PE) 169 Prime (PE)
A: There are two parts to your question, one is handling the strings, and the other, applying that to the data frame.
For handling the strings: if the pattern stays the same, meaning you are sure that each string will be digits-digits-chars and that multiple values are separated by ',', then you can use something like this function:
def deconcat(output_value):
output_value = output_value.split(',')
result = ''
for part in output_value:
_, _, item = part.split('-')
result += item + ", "
return result.rstrip(', ')
The function takes a string, splits it by ',' if there are multiple values, then for each value, splits by '-' and adds the third part to a resulting string.
Now you only have to apply this function to the whole dataframe and create your new column:
df['final'] = df.OutputValues.apply(deconcat)
This will apply the function to each row of the OutputValues in the dataframe, and add the resulting string to a new column called 'final'. | unknown | |
d5988 | train | By using the .NET Framework's UdpAppender, it is easy to access the log file. Here is the link:
Udp Appender | unknown | |
d5989 | train | As per comments, the solution is to create an add(...) method inside of your CardStack class where in the method, add the parameter to the ArrayList. If I posted the code in this answer (which is only 3 lines of code), I'd be cheating you of the opportunity of first trying it yourself. Please check out your text book or a basic Java tutorial on methods and on passing information into methods (please see links) and give it a go. You may surprise yourself with what you come up with.
In pseudocode:
public void add method that takes a FlashCard parameter
add the FlashCard parameter to the FlashCards ArrayList
End of method | unknown | |
d5990 | train | When does the page context get destroyed?
The page scope is indistinguishable from the UI component tree.
Therefore, the page context is destroyed when JSF removes the UI
component tree (also called the view) from the session. However, when
this happens, Seam does not receive a callback and therefore the
@Destroy method on a page-scoped component never gets called. If the
user clicks away from the page or closes the browser, the page context
has to wait to get cleaned up until JSF kills the view to which it is
bound. This typically happens when the session ends or if the number
of views in the session exceeds the limit. This limit is established
using the com.sun.faces.numberOfViewsInSession and
com.sun.faces.numberOfLogicalViews context parameters in the Sun
implementation. Both default to 15. However, it's generally best not
to mess with these values.
The page scope should be seen merely as a way to keep data associated
with a view as a means of maintaining the integrity of the UI
component. This focus is especially relevant for data tables, which
have historically been problematic. I would not use the page scope as
a general storage mechanism for use case or workflow data. A good way
to think of it is as a cache.
http://www.seamframework.org/42514.lace
A: Do you ever use this bean in a page? If not, I guess the destroy method will not be called, because the bean is never created.
Or you can add @Startup to force creating the bean when the scope is initialized. | unknown | |
d5991 | train | import re
x='500,403,34,"hello there, this attribute has a comma in it",567'
print re.split(r""",(?=(?:[^"]*"[^"]*"[^"]*)*[^"]*$)""",x)
Output : ['500', '403', '34', '"hello there, this attribute has a comma in it"', '567']
A: Just use the existing CSV package. Example:
import csv
with open('file.csv', 'rb') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
print ', '.join(row)
A: The CSV module is the easiest way to go:
import csv
with open('input.csv') as f:
for row in csv.reader(f):
print row
For input input.csv:
500,403,34,"hello there, this attribute has a comma in it",567
500,403,34,"hello there this attribute has no comma in it",567
500,403,34,"hello there, this attribute has multiple commas, in, it",567
The output is:
['500', '403', '34', 'hello there, this attribute has a comma in it', '567']
['500', '403', '34', 'hello there this attribute has no comma in it', '567']
['500', '403', '34', 'hello there, this attribute has multiple commas, in, it', '567'] | unknown | |
d5992 | train | There's no relationship - make and bash are two separate programs that parse distinct syntaxes. That they have similar or overlapping syntactic elements is likely due to having been developed around the same time and for some similar purposes, but they don't rely on the same parser or grammar.
Many distinct languages have shared language features either for ease of adoption or from imitation. Most languages use + to mean addition, for example, but that doesn't make the languages related. | unknown | |
d5993 | train | It would be preferable to create the elements programmatically.
var arrExercises = ['Push Ups', 'Dips', 'Burpees'];//add all exercises here
arrExercises.forEach(function(exercise, i){
var $exercise = $('<div>', {id:'div-excercise-'+i, "class":'exercise'});//create the exercise element
//give the on click handler to the exercise
$exercise.on('click', function(e){
$("#specifier").css("display", "block");
$(".backButton").css("display", "none");
$("#info").text(exercise);//set the info to the exercise text
});
$('#exerciseContainer').append($exercise);//append this exercise to the container
});
A: If I understand you correctly, you can get the clicked element text by using $(this).text() inside the click event.
$(function() {
$('.exercise').click(function() {
$('#info').text($(this).text());
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<!--HEADER-->
<div class="header">
<div id="info">
<p>Select Exercise</p>
<!--THIS IS WHERE I WOULD LIKE THE STRING TO UPDATE-->
</div>
</div>
<!--EXERCISE LIST-->
<div id="exerciseContainer">
<div class="exerciseL exercise">
<h3>Push Ups</h3>
</div>
<div class="exerciseR exercise">
<h3>Dips</h3>
</div>
<div class="exerciseL exercise">
<h3>Burpees</h3>
</div>
<div class="exerciseR exercise">
<h3>Plank</h3>
</div>
<div class="exerciseL exercise">
<h3>Sit Ups</h3>
</div>
<div class="exerciseR exercise">
<h3>Leg Ups</h3>
</div>
<div class="exerciseL exercise">
<h3>Russian Twists</h3>
</div>
<div class="exerciseR exercise">
<h3>Back Raises</h3>
</div>
</div>
<!--SPECIFY TIMING FOR EXERCISES-->
<div id="specifier">
<div id="containSliders">
<!--Exercise time allocator-->
<h1></h1>
<!--I WOULD LIKE THIS TO UPDATE ALSO-->
<div id="containSliderOne">
<p>Time:</p>
<output id="timeValue">60 sec.</output>
<input type="range" id="determineTime" step="10" value="60" min="0" max="180" />
</div>
<!--Exercise time allocator-->
<div id="containSliderTwo">
<p>Rest Time:</p>
<output id="restValue">10 sec.</output>
<input type="range" id="determineRest" step="10" value="10" min="0" max="180" />
</div>
<!--Add rest button-->
<div id="addBreak">
<p>Add Break</p>
</div>
<!--Back Button-->
<div id="cancel">
<a id="exerciseCancel" href="exercises.html">
<img src="images/backButtonUp.png" width="100" alt="" />
<img src="images/backButtonDown.png" width="85" alt="" />
</a>
</div>
<!--Confirm Button-->
<div id="confirm">
<a id="exerciseConfirm" href="routineOverview.html">
<img src="images/whiteTickUp.png" width="95" alt="" />
<img src="images/whiteTickDown.png" width="80" alt="" />
</a>
</div>
</div>
</div> | unknown | |
d5994 | train | You said you tried DeleteKey(int score) but it didn't work. Your code does not have the DeleteKey function anywhere. If you don't know how to use that function, the code below will show you how to use it. If you actually know how to use it but it's not working as mentioned in your question, then call PlayerPrefs.Save() after it. This should delete the key and update it right away.
To reset the score after each game, put the code in the OnDisable() function.
void OnDisable()
{
PlayerPrefs.DeleteKey("Score");
PlayerPrefs.Save();
}
To reset it when game begins, get the current score like you did in the Awake() function then change the function above to OnEnable().
A: The problem is that you still get the score from the previous session, so you need to reset the saved value back to zero by using the line:
PlayerPrefs.SetInt("Score", 0);
public static int score ;
Text text;
void Start(){
PlayerPrefs.SetInt("Score", 0);
// Set up the reference.
text = GetComponent<Text>();
score = 0;
score = PlayerPrefs.GetInt("Score",0);
}
void Update ()
{
// Set the displayed text to be the word "Score" followed by the score value.
text.text = "Score: " + score;
PlayerPrefs.SetInt("Score", score);
} | unknown | |
d5995 | train | I finally figured it out. The error occurred because the Oculus was plugged into the dedicated GPU while the desktop monitor was plugged into the on-chip Intel GPU. It was resolved once both were plugged into the NVIDIA GPU. | unknown |
d5996 | train | You're overcomplicating it:
var tag = function(o) {
Object.defineProperty(o, '__tagged', {
enumerable: false,
configurable: false,
writable: false,
value: "static"
});
return o;
}
var isTagged = function(o) {
return Object.getOwnPropertyNames(o).indexOf('__tagged') > -1;
}
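A quick usage sketch of these two helpers:
var obj = tag({ name: 'example' });
isTagged(obj); // true
isTagged({}); // false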
A: I think you're overcomplicating all of this. There's no reason you need to store the tag on the object itself. If you create a separate object that uses the object's pointer as a key, not only will you conserve space, but you'll prevent any unintentional collisions should the arbitrary object happen to have a property named "_tagged".
var __tagged = {};
function tag(obj){
__tagged[obj] = true;
return obj;
}
function isTagged(obj){
return __tagged.hasOwnProperty(obj);
}
function getTagged(obj){
if(isTagged(obj)) return obj;
}
== EDIT ==
So I decided to take a minute to create a more robust tagging system. This is what I've created.
var tag = {
_tagged: {},
add: function(obj, tag){
var tags = this._tagged[obj] || (this._tagged[obj] = []);
if(tag) tags.push(tag);
return obj;
},
remove: function(obj, tag){
if(this.isTagged(obj)){
if(tag === undefined) delete this._tagged[obj];
else{
var idx = this._tagged[obj].indexOf(tag);
if(idx != -1) this._tagged[obj].splice(idx, 1);
}
}
},
isTagged: function(obj){
return this._tagged.hasOwnProperty(obj);
},
get: function(tag){
var objects = this._tagged
, list = []
;//var
for(var o in objects){
if(objects.hasOwnProperty(o)){
if(objects[o].indexOf(tag) != -1) list.push(o);
}
}
return list;
}
}
Not only can you tag an object, but you can actually specify different types of tags and retrieve objects with specific tags in the form of a list. Let me give you an example.
var a = 'foo'
, b = 'bar'
, c = 'baz'
;//var
tag.add(a);
tag.add(b, 'tag1');
tag.add(c, 'tag1');
tag.add(c, 'tag2');
tag.isTagged(a); // true
tag.isTagged(b); // true
tag.isTagged(c); // true
tag.remove(a);
tag.isTagged(a); // false
tag.get('tag1'); // [b, c]
tag.get('tag2'); // [c]
tag.get('blah'); // []
tag.remove(c, 'tag1');
tag.get('tag1'); // [b] | unknown | |
d5997 | train | To show ads from inside classes other than the main activity, you need to use a facade. Basically, you use a listener to load and display the ads.
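As a rough sketch (the interface and method names are illustrative, not taken from the tutorial): the core game code only sees an interface like the one below, and the Android launcher passes in an AdMob-backed implementation, so any class can request an ad through the reference it receives (e.g. via a constructor parameter).
public interface AdHandler {
    void showBannerAd(boolean visible);
    void showInterstitialAd(Runnable onClosed); // optional callback once the ad is closed
}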
Follow the official libGDX tutorial guide. It covers both banner and interstitial ads and isn't outdated. It uses the new AdMob via Google Play Services. | unknown |
d5998 | train | If you don't override your plugin's render method (2.4 and up), your plugin will be available as instance in your context. Using the following, you'll get the 1-based position of your plugin:
{{ instance.get_position_in_placeholder }}
Also interesting: is_first_in_placeholder and is_last_in_placeholder. In fact, @paulo already showed you the direction in his comment ;) This is the code, with the new line number: https://github.com/divio/django-cms/blob/develop/cms/models/pluginmodel.py#L382 | unknown |
d5999 | train | You must tag your image with the Docker Registry URL and then push like this:
docker tag design-service dockerregistry.azurecr.io/design-service
docker push dockerregistry.azurecr.io/design-service
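If you want an explicit version instead of the implicit latest tag, the same commands accept a tag suffix (the version name here is just an example):
docker tag design-service dockerregistry.azurecr.io/design-service:v1
docker push dockerregistry.azurecr.io/design-service:v1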
Note: The correct term is registry and not repository. A Docker registry holds repositories of tagged images. | unknown | |
d6000 | train | One straight-forward use case is a thread processing a batch of elements, occasionally trying to commit the elements that have been processed. If acquiring the lock fails, the elements will be committed in the next successful attempt or at the final, mandatory commit.
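As a sketch of that pattern (names are illustrative): the worker commits only if the lock happens to be free right now, and otherwise just keeps processing.
if (commitLock.tryLock()) {
    try {
        commitProcessedElements(); // flush whatever has been processed so far
    } finally {
        commitLock.unlock();
    }
}
// otherwise skip this round; the final, mandatory commit uses lock() and waits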
Another example can be found within the JRE itself: ForkJoinTask.helpExpungeStaleExceptions() is a method for performing a task that can be done by an arbitrary thread, but only by one at a time. Only the thread that successfully acquires the lock performs it; all others simply return, as the unavailability of the lock implies that some thread is already performing the task.
It is possible to implement a similar feature before Java 5 if you separate the intrinsic locking feature, which doesn't support being optional, from the locking logic, which can be represented as ordinary object state. This answer provides an example.
A:
My question is, did this use case not exist before Java 5, or did folks implement it via some other technique?
The Lock interface was added in Java 5, is that what you mean? Not sure what was there before.
I am not able to comprehend the need to perform alternative actions based on lock availability. Can somebody please explain real use cases for this?
Sure. I just wrote one of these today, actually. My specific Lock implementation is a distributed lock shared among a cluster of servers using the JGroups protocol stack. The lock.tryLock(...) method makes RPC calls to the cluster and waits for responses. It is very possible that multiple nodes are trying to lock at the same time and their actions clash, causing delays and certainly causing some lock attempts to fail. The call can then return false or time out, in which case my code just waits and tries again. My code is literally:
if (!clusterLock.tryLock(TRY_LOCK_TIME_MILLIS, TimeUnit.MILLISECONDS)) {
logger.warn("Could not lock cluster lock {}", beanName);
return;
}
Another use case is a situation where one part of the code holds a lock for a long time and other parts of the code don't want to wait that long, preferring to get other work done instead.
Here's another place in my code where I'm using tryLock(...):
// need to wait for the lock but log
boolean locked = false;
for (int i = 0; i < TRY_LOCK_MAX_TIMES; i++) {
if (lock2.tryLock(100, TimeUnit.MILLISECONDS)) {
logger.debug("Lock worked");
locked = true;
break;
} else {
logger.debug("Lock didn't work");
}
}
A: The reason for writing code like that example is if you have a thread that is doing more than one job.
Imagine you put it in a loop:
while (true) {
if (taskA_needsAttention() && taskA_lock.tryLock()) {
try {
...do some work on task A...
} finally {
taskA_lock.unlock();
}
} else if (taskB_needsAttention() && taskB_lock.tryLock()) {
try {
...do some work on task B...
} finally {
taskB_lock.unlock();
}
} else ...
}
Personally, I would prefer not to write code like that. I would prefer to have different threads responsible for task A and task B or better still, to use objects submitted to a thread pool.
A: Use case 1
One use case is to skip the guarded work entirely rather than wait for it. Take the example below: a very strict internet hotspot where you can only access one webpage at a time, and other requests are cancelled.
With synchronized you cannot cancel it, since it waits until it can obtain the lock. So tryLock just gives you flexibility to cancel something or to run other behavior instead.
package Concurrency;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class LimitedHotspotDemo
{
private static Lock webAccessLock = new ReentrantLock();
private static class AccessUrl implements Runnable
{
private String url;
public AccessUrl(String url)
{
this.url = url;
}
@Override
public void run()
{
if(webAccessLock.tryLock()) {
System.out.println("Begin request for url " + url);
try {
Thread.sleep(1500);
System.out.println("Request completed for " + url);
} catch (InterruptedException e) {
return; // the finally block below releases the lock exactly once
} finally {
webAccessLock.unlock();
}
} else {
System.out.println("Cancelled request " + url + "; already one request running");
}
}
}
public static void main(String[] args)
{
for(String url : Arrays.asList(
"https://www.google.com/",
"https://www.microsoft.com/",
"https://www.apple.com/"
)) {
new Thread(new AccessUrl(url)).start();
}
}
}
Output:
Begin request for url https://www.microsoft.com/
Cancelled request https://www.google.com/; already one request running
Cancelled request https://www.apple.com/; already one request running
Request completed for https://www.microsoft.com/
Use case 2
Another use case is a light sensor that keeps the light on while there is movement in the room (the Sensor thread). Another thread (TurnOffLights) runs to switch the light off once there has been no movement in the room for a few seconds.
The TurnOffLights thread uses tryLock to obtain the lock. If the lock cannot be obtained, it tries again after 500 ms. The last Sensor thread holds the lock for 5 seconds, after which the TurnOffLights thread can obtain it and turn off the lights.
So in this case the TurnOffLights thread is only allowed to turn off the lights once there have been no signals to the Sensor for 5 seconds, and it uses tryLock to obtain the lock.
package Concurrency;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class LightSensorDemo
{
private static volatile Lock lock = new ReentrantLock();
private static volatile Thread lastSignal = null;
private static Sensor sensor = new Sensor();
private static class Sensor implements Runnable
{
private static Boolean preparing = false;
public static Boolean isPreparing()
{
return preparing;
}
@Override
public void run()
{
System.out.println("Signal send " + Thread.currentThread().getName());
try {
invalidatePreviousSignalsAndSetUpCurrent();
Thread.sleep(5 * 1000);
} catch (InterruptedException e) {
//System.out.println("Signal interrupted " + Thread.currentThread().getName());
return;
} finally {
lock.unlock();
}
}
private static synchronized void invalidatePreviousSignalsAndSetUpCurrent() throws InterruptedException
{
preparing = true;
if(lastSignal != null) {
lastSignal.interrupt();
}
lastSignal = Thread.currentThread();
lock.lockInterruptibly();
preparing = false;
}
}
private static class TurnOffLights implements Runnable
{
@Override
public void run()
{
while(true) {
try {
Thread.sleep(500);
} catch (InterruptedException e) {
System.out.println("Interrupted" + this.getClass().getName());
return;
}
if (!Sensor.isPreparing()) {
if(lock.tryLock()) {
try {
System.out.println("Turn off lights");
break;
} finally {
lock.unlock();
}
} else {
System.out.println("Cannot turn off lights yet");
}
} else {
System.out.println("Cannot turn off lights yet");
}
}
}
}
public static void main(String[] args) throws InterruptedException
{
Thread turnOffLights = new Thread(new TurnOffLights());
turnOffLights.start();
//Send 40 signals to the light sensor to keep the light on
for(int x = 0; x < 10; x++) {
new Thread(sensor).start(); //some active movements
new Thread(sensor).start(); //some active movements
new Thread(sensor).start(); //some active movements
new Thread(sensor).start(); //some active movements
Thread.sleep(250);
}
turnOffLights.join();
}
}
Notice also that I use lock.lockInterruptibly() so that previous signals can be interrupted; the 5-second countdown therefore always starts from the last signal.
Output is something like:
...
Cannot turn off lights yet
Cannot turn off lights yet
Signal send Thread-19
Signal send Thread-20
Cannot turn off lights yet
Cannot turn off lights yet
Cannot turn off lights yet
Cannot turn off lights yet
Cannot turn off lights yet
Cannot turn off lights yet
Cannot turn off lights yet
Cannot turn off lights yet
Cannot turn off lights yet
Cannot turn off lights yet
Turn off lights
Process finished with exit code 0 | unknown |