David Catuhe
Use Kinect as Mouse, Kinect Keyboard Simulator & Kinect Sabre for Kinect for Windows SDK 1.0
Following the official release of Kinect for Windows SDK 1.0 and Kinect Toolbox, I'm pleased to share with you two (useful?) samples I wrote:
https://www.catuhe.com/msdn/kinecttools.zip
Kinect Keyboard Simulator
This tool allows you to send keys to a specified application when gestures are detected. An obvious usage is to change slides in PowerPoint when you swipe to the left or to the right.
Kinect Sabre
Kinect Sabre is THE mandatory tool for your Kinect!! It creates an augmented reality vision of yourself with a LIGHT SABER in your left hand!!! (Warning: you need to install XNA Game Studio 4.0 to use Kinect Sabre.)
Hope you like these tools
MishraReader 1.0.2.0 (beta 2) is out!
After several months of coding, we are proud to announce the beta 2 of MishraReader. This version is a full rewrite of the beta 1 to include the MVVM and dependency injection design patterns.
But the major feature of this version is the reshaping of the user interface to follow the METRO guidelines:
You can grab it freely here (MishraReader is available in French and English):
https://mishrareader.codeplex.com
To use it, just follow these instructions:
Connection
On this screen, you only have to type your Google Reader account information and click on [SIGN IN]:
Using MishraReader
Once connected, you are able to see unread/starred or all posts:
Using the [Show] dropdown menu, you can select only the subscription you want to read:
When you are reading a post, you can:
Configuring MishraReader
Using the settings menu, you can configure different options (don't forget to click on [SAVE CHANGES]):
Account
In the account screen, you are able to disconnect the current account and decide if you want to automatically mark items as read when you select them:
Sharing Services & Bookmark Services
With this screen, you can configure sharing and bookmark services.
Display
MishraReader can use 8 different accent colors that you can select using the Display screen.
You can also choose to:
- Show only post summaries (instead of using a web view of the full post), which is quicker to load
- Use a notification icon and, in this case, show or hide the main window in the taskbar
Network
The network screen allows you to select:
- the quantity of items downloaded per request (between 10 and 500)
- the automatic refresh interval (between 1 minute and 1 hour)
Conclusion
I hope you will like this version as we worked hard to make it the best feed reader available!
Do not hesitate to give us your feedback using the https://mishrareader.codeplex.com site
Official Kinect for Windows SDK and Kinect Toolbox 1.1.1 are out!
The official Kinect for Windows SDK is out and you can grab it here:
https://www.microsoft.com/en-us/kinectforwindows/develop/overview.aspx
The key points are:
- As long as you use a Kinect for Windows sensor (not the Xbox 360 one) and the official SDK, you can develop commercial applications using Kinect technologies.
- New near mode for depth values (no skeleton tracking in this version) which enables the depth camera to see objects as close as 40 centimeters
- Up to 4 sensors can be connected to the same computer
Alongside the SDK, a new sensor is available. If you want to buy it ($249.99), you can go there:
https://world wide web.microsoft.com/en-us/kinectforwindows/purchase/
Of course the Kinect Toolbox 1.1.1 is also out and supports the final version of Kinect for Windows SDK:
https://kinecttoolbox.codeplex.com/
The NuGet package can be found there:
https://nuget.org/List/Packages/KinectToolbox
Use the power of Azure to create your own raytracer
The power available in the cloud is growing every day. So I decided to use this raw CPU power to write a small raytracer.
I'm certainly not the first one to have had this idea as, for example, Pixar or GreenButton already use Azure to render pictures.
In this article, we will see how to write our own rendering system using Azure in order to be able to produce your own 3D rendered picture.
The article will be organized around the following axis:
- Prerequisites
- Architecture
- Deploying to Azure
- Defining a scene
- Web server and worker roles
- How it works
- JavaScript client
- Conclusion
- To go further
The final solution can be downloaded here and if you want to see the final result, please go there: https://azureraytracer.cloudapp.net/
You can use a default scene or create your own scene definition (we will see later how to do that).
The rendered pictures are limited to a 512×512 resolution (you can of course change this setting).
Prerequisites
To be able to use the project, you must have:
- a Visual Studio 2010 version (the Express version is supported): https://www.microsoft.com/visualstudio/en-us/products/2010-editions
- the Windows Azure SDK: https://www.windowsazure.com/en-us/develop/downloads/
You will also need an Azure account. You can get a free one just there: https://www.windowsazure.com/en-us/pricing/free-trial/
Architecture
Our architecture can be defined using the following schema:
The client will connect to a web server composed of one or more web roles (in my example, there are two web roles). The web roles will provide the web pages and a web service used to get the status of a request. When a user wants to render a picture, the associated web role will write a render message in an Azure queue. A farm of worker roles will read the same queue and will process any incoming render message. Azure queues are transactional and atomic, so only one worker role will grab the order: the first available worker will read and remove the message. As queues are transactional, if a worker role crashes, the render message is reintroduced in order to avoid losing your work.
In our sample, I decided to use a semaphore in order to limit the maximum number of requests executed concurrently. Indeed, I prefer not to overload my workers in order to give maximum CPU power to each render task.
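The re-queue-on-crash behavior described above can be sketched in a few lines (Python used purely for illustration; the names and the exception-based failure model are simplifications I introduce here — real Azure queues achieve this with visibility timeouts, not exceptions):

```python
import queue

# Toy model of at-least-once queue processing: if a handler crashes,
# the message goes back into the queue so the job is never lost.
def process_all(render_queue, handler):
    """Drain the queue, re-queueing any message whose handler fails."""
    done = []
    while True:
        try:
            msg = render_queue.get_nowait()
        except queue.Empty:
            return done
        try:
            done.append(handler(msg))
        except Exception:
            render_queue.put(msg)  # reintroduce the message, as the worker roles do
```

A worker that dies mid-job therefore only delays the render; another worker picks the same message up on the next pass.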
Deploying to Azure
After opening the solution, you will be able to launch it directly from Visual Studio inside the Azure Emulator. You will then be able to debug and fine-tune your code before sending it to the production stage.
Once you're ready, you can deploy your package on your Azure account using the following procedure:
- Open the "AzureRaytracer.sln" solution within Visual Studio
- Configure your Azure account: to do so, right-click on the "AzureRaytracer" project and choose the "Publish" menu. You will get the following screen:
- Using this screen, please choose the "Sign in to download credentials" option, which will let you download an automated configuration file for your Azure account:
- Once the file is downloaded, we will import it:
- After importing the data, Visual Studio will ask you to give a name for the service:
- The next screen will present a summary of all selected options:
- Before publishing, we must modify some parameters to prepare our package for the production stage. First of all, we have to go to the Azure portal: https://windows.azure.com. Go to the storage accounts tab to grab the required information:
- On the right pane, you can get the primary access key:
- With this information, you can go back to your project:
- On every role, you have to go to the settings menu in order to define the Azure connection string (you will use here the information grabbed from the Azure portal):
- You must change the "AzureStorage" value using the "…" button:
- In the Configuration tab, you can change the instance count for each role:
- For more information about instance sizes: https://msdn.microsoft.com/en-US/library/ee814754.aspx
- Finally you will be able to publish your package:
Your raytracer is now ONLINE!!! We will now see how to use it.
Defining a scene
To define a scene, you have to specify it using an XML file. Here is a sample scene:
<?xml version="1.0" encoding="utf-8" ?>
<scene FogStart="5" FogEnd="20" FogColor="0, 0, 0" ClearColor="0, 0, 0" AmbientColor="0.1, 0.1, 0.1">
  <objects>
    <sphere Name="Red Sphere" Center="0, 1, 0" Radius="1">
      <defaultShader Diffuse="1, 0, 0" Specular="1, 1, 1" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Transparent Sphere" Center="-3, 0.5, 1.5" Radius="0.5">
      <defaultShader Diffuse="0, 0, 1" Specular="1, 1, 1" OpacityLevel="0.4" RefractionIndex="2.8"/>
    </sphere>
    <sphere Name="Green Sphere" Center="-3, 2, 4" Radius="1">
      <defaultShader Diffuse="0, 1, 0" Specular="1, 1, 1" ReflectionLevel="0.6" SpecularPower="10"/>
    </sphere>
    <sphere Name="Yellow Sphere" Center="-0.5, 0.3, -2" Radius="0.3">
      <defaultShader Diffuse="1, 1, 0" Specular="1, 1, 1" Emissive="0.3, 0.3, 0.3" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Orange Sphere" Center="1.5, 2, -1" Radius="0.5">
      <defaultShader Diffuse="1, 0.5, 0" Specular="1, 1, 1" ReflectionLevel="0.6"/>
    </sphere>
    <sphere Name="Gray Sphere" Center="-2, 0.2, -0.5" Radius="0.2">
      <defaultShader Diffuse="0.5, 0.5, 0.5" Specular="1, 1, 1" ReflectionLevel="0.6" SpecularPower="1"/>
    </sphere>
    <ground Name="Plane" Normal="0, 1, 0" Offset="">
      <checkerBoard WhiteDiffuse="1, 1, 1" BlackDiffuse="0.1, 0.1, 0.1" WhiteReflectionLevel="0.1" BlackReflectionLevel="0.5"/>
    </ground>
  </objects>
  <lights>
    <light Position="-2, 2.5, -1" Color="1, 1, 1"/>
    <light Position="1.5, 2.5, 1.5" Color="0, 0, 1"/>
  </lights>
  <camera Position="0, 2, -6" Target="-0.5, 0.5, 0" />
</scene>
The file structure is the following:
- A [scene] tag is used as root tag and allows you to define the following parameters:
  - FogStart / FogEnd : Define the range of the fog from the camera
  - FogColor : RGB color of the fog
  - ClearColor : Background RGB color
  - AmbientColor : Ambient RGB color
- An [objects] tag which contains the objects list
- A [lights] tag which contains the lights list
- A [camera] tag which defines the scene camera. It is our point of view, defined by the following parameters:
  - Position : Camera position (X, Y, Z)
  - Target : Camera target (X, Y, Z)
All objects are defined by a name and can be of one of the following types:
- sphere : Sphere defined by its center and radius
- ground : Plane representing the ground, defined by its offset from 0 and the direction of its normal
- mesh : Complex object defined by a list of vertices and faces. It can be manipulated with 3 vectors: Position, Rotation and Scaling:
<mesh Name="Box" Position="-3, 0, 2" Rotation="0, 0.7, 0">
  <vertices count="24">-1, -1, -1, -1, 0, 0, -1, -1, 1, -1, 0, 0, -1, 1, 1, -1, 0, 0, -1, 1, -1, -1, 0, 0, -1, 1, -1, 0, 1, 0, -1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, -1, 0, 1, 0, 1, 1, -1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, -1, 1, 1, 0, 0, 1, -1, -1, 1, 0, 0, -1, -1, 1, 0, -1, 0, -1, -1, -1, 0, -1, 0, 1, -1, -1, 0, -1, 0, 1, -1, 1, 0, -1, 0, -1, -1, 1, 0, 0, 1, 1, -1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, -1, 1, 1, 0, 0, 1, -1, -1, -1, 0, 0, -1, -1, 1, -1, 0, 0, -1, 1, 1, -1, 0, 0, -1, 1, -1, -1, 0, 0, -1</vertices>
  <indices count="36">0,1,2,2,3,0,4,5,6,6,7,4,8,9,10,10,11,8,12,13,14,14,15,12,16,17,18,18,19,16,20,21,22,22,23,20</indices>
</mesh>
Faces are indexes into the vertices list. A face contains 3 vertices and each vertex is defined by two vectors: position (X, Y, Z) and normal (Nx, Ny, Nz).
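To make the encoding concrete, here is a small illustrative decoder (Python, a hypothetical helper that is not part of the project) turning the flat [vertices] and [indices] arrays into faces:

```python
# Decode the mesh format described above: 6 floats per vertex
# (x, y, z, nx, ny, nz) and 3 vertex indexes per face.
def decode_mesh(vertices, indices):
    """Return a list of faces; each face is 3 (position, normal) pairs."""
    verts = [((vertices[i], vertices[i + 1], vertices[i + 2]),
              (vertices[i + 3], vertices[i + 4], vertices[i + 5]))
             for i in range(0, len(vertices), 6)]
    return [[verts[indices[f]], verts[indices[f + 1]], verts[indices[f + 2]]]
            for f in range(0, len(indices), 3)]
```

Applied to the box above (24 vertices, 36 indices), this yields the 12 triangles of a cube, two per side.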
Objects can have a child node used to define the applied material:
- defaultShader : Default material defined by:
  - Diffuse : Base RGB color
  - Ambient : Ambient RGB color
  - Specular : Specular RGB color
  - Emissive : Emissive RGB color
  - SpecularPower : Sharpness of the specular
  - RefractionIndex : Refraction index (you must also define OpacityLevel to use it)
  - OpacityLevel : Opacity level (you must also define RefractionIndex to use it)
  - ReflectionLevel : Reflection level (0 = no reflection)
- checkerBoard : Material defining a checkerboard with the following properties:
  - WhiteDiffuse : "White" square diffuse color
  - WhiteAmbient : "White" square ambient color
  - WhiteReflectionLevel : "White" square reflection level
  - BlackDiffuse : "Black" square diffuse color
  - BlackAmbient : "Black" square ambient color
  - BlackReflectionLevel : "Black" square reflection level
Lights are defined via the [light] tag which can accept Position and Color attributes. Lights are omnidirectional.
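As a rough illustration of how such a file might be consumed, here is a minimal Python stand-in for the scene loading (the real project does this in C# via Scene.Load; only a few of the attributes described above are read here, and the dictionary shape is my own invention):

```python
import xml.etree.ElementTree as ET

def parse_vector(text):
    """Parse an attribute like "0, 1, -6" into a tuple of floats."""
    return tuple(float(v) for v in text.split(","))

def load_scene(xml_text):
    """Read fog range, spheres, lights and camera from a scene file."""
    root = ET.fromstring(xml_text)
    return {
        "fog": (float(root.get("FogStart")), float(root.get("FogEnd"))),
        "spheres": [(s.get("Name"), parse_vector(s.get("Center")), float(s.get("Radius")))
                    for s in root.findall("objects/sphere")],
        "lights": [parse_vector(l.get("Position")) for l in root.findall("lights/light")],
        "camera": parse_vector(root.find("camera").get("Position")),
    }
```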
Finally, if we use this scene file:
<?xml version="1.0" encoding="utf-8" ?>
<scene FogStart="5" FogEnd="20" FogColor="0, 0, 0" ClearColor="0, 0, 0" AmbientColor="1, 1, 1">
  <objects>
    <ground Name="Plane" Normal="0, 1, 0" Offset="">
      <defaultShader Diffuse="0.4, 0.4, 0.4" Specular="1, 1, 1" ReflectionLevel="0.3" Ambient="0.5, 0.5, 0.5"/>
    </ground>
    <sphere Name="Sphere" Center="-0.5, 1.5, 0" Radius="1">
      <defaultShader Diffuse="0, 0, 1" Specular="1, 1, 1" ReflectionLevel="" Ambient="1, 1, 1"/>
    </sphere>
  </objects>
  <lights>
    <light Position="-0.5, 2.5, -2" Color="1, 1, 1"/>
  </lights>
  <camera Position="0, 2, -6" Target="-0.5, 0.5, 0" />
</scene>
We will obtain the following picture:
Web server and worker roles
The web server is running under ASP.NET and will provide two functionalities:
- Connecting to the worker roles using the queue in order to launch a rendering:
void Render(string scene)
{
    try
    {
        InitializeStorage();
        var guid = Guid.NewGuid();
        CloudBlob blob = Container.GetBlobReference(guid + ".xml");
        blob.UploadText(scene);
        blob = Container.GetBlobReference(guid + ".progress");
        blob.UploadText("-1");
        var message = new CloudQueueMessage(guid.ToString());
        queue.AddMessage(message);
        guidField.Value = guid.ToString();
    }
    catch (Exception ex)
    {
        System.Diagnostics.Trace.WriteLine(ex.ToString());
    }
}
As you can see, the web server generates a GUID for each request to identify the rendering job. Then, the description of the scene (the XML file) is copied to a blob (with the GUID as name) in order to allow the worker roles to access it. Finally a message is sent to the queue and a blob is created to give feedback on the request progress.
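The submission flow can be summarized with this small sketch (Python for illustration; a dict and a list stand in for the blob container and the queue — these are my simplifications, not the real Azure storage API):

```python
import uuid

# One GUID ties together the scene blob, the progress blob and the
# queue message, mirroring the Render method above.
def submit_render(scene_xml, blobs, render_queue):
    guid = str(uuid.uuid4())
    blobs[guid + ".xml"] = scene_xml   # scene definition for the workers
    blobs[guid + ".progress"] = "-1"   # "-1" means "queued"
    render_queue.append(guid)          # wake up a worker role
    return guid
```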
- Publishing a web service to expose request progress:
[OperationContract]
[WebGet]
public string GetProgress(string guid)
{
    try
    {
        CloudBlob blob = _Default.Container.GetBlobReference(guid + ".progress");
        string result = blob.DownloadText();
        if (result == "101")
            blob.Delete();
        return result;
    }
    catch (Exception ex)
    {
        return ex.Message;
    }
}
The web service will get the content of the blob and return the result. If the request is queued, the value will be -1 and if the request is finished the value will be 101 (and in this case the blob will be deleted).
The worker roles will read the content of the queue and, when a message is available, a worker will get it and handle it:
while (true)
{
    CloudQueueMessage msg = null;
    semaphore.WaitOne();
    try
    {
        msg = queue.GetMessage();
        if (msg != null)
        {
            queue.DeleteMessage(msg);
            string guid = msg.AsString;
            CloudBlob blob = container.GetBlobReference(guid + ".xml");
            string xml = blob.DownloadText();
            CloudBlob blobProgress = container.GetBlobReference(guid + ".progress");
            blobProgress.UploadText("0");
            WorkingUnit unit = new WorkingUnit();
            unit.OnFinished += () =>
            {
                blob.Delete();
                unit.Dispose();
                semaphore.Release();
            };
            unit.Launch(guid, xml, container);
        }
        else
        {
            semaphore.Release();
        }
        Thread.Sleep(1000);
    }
    catch (Exception ex)
    {
        semaphore.Release();
        if (msg != null)
        {
            CloudQueueMessage newMessage = new CloudQueueMessage(msg.AsString);
            queue.AddMessage(newMessage);
        }
        Trace.WriteLine(ex.ToString());
    }
}
Once the scene is loaded, the worker will update the progress state (using the associated blob) and will create a WorkingUnit which will be in charge of asynchronously producing the picture. It will raise an OnFinished event when the render is done in order to clean and dispose all associated resources.
We can also see here the usage of the semaphore in order to limit the number of concurrent renders.
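The semaphore pattern is easy to demonstrate in isolation (illustrative Python; the real code uses a .NET semaphore around queue.GetMessage, and the doubled job value below is just a stand-in for the render):

```python
import threading

# At most `limit` workers run the critical section concurrently,
# whatever the number of submitted jobs.
def run_limited(jobs, limit):
    semaphore = threading.Semaphore(limit)
    lock = threading.Lock()
    active = peak = 0
    results = []

    def worker(job):
        nonlocal active, peak
        with semaphore:              # blocks while `limit` renders are in flight
            with lock:
                active += 1
                peak = max(peak, active)
            results.append(job * 2)  # stand-in for the actual render work
            with lock:
                active -= 1

    threads = [threading.Thread(target=worker, args=(j,)) for j in jobs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results), peak
```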
The WorkingUnit is mainly defined like this:
public void Launch(string guid, string xml, CloudBlobContainer container)
{
    try
    {
        XmlDocument xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);
        XmlNode sceneNode = xmlDocument.SelectSingleNode("/scene");
        Scene scene = new Scene();
        scene.Load(sceneNode);
        ParallelRayTracer renderer = new ParallelRayTracer();
        resultBitmap = new Bitmap(RenderWidth, RenderHeight, PixelFormat.Format32bppRgb);
        bitmapData = resultBitmap.LockBits(new Rectangle(0, 0, RenderWidth, RenderHeight), ImageLockMode.WriteOnly, PixelFormat.Format32bppRgb);
        int bytes = Math.Abs(bitmapData.Stride) * bitmapData.Height;
        byte[] rgbValues = new byte[bytes];
        IntPtr ptr = bitmapData.Scan0;
        renderer.OnAfterRender += (obj, evt) =>
        {
            System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes);
            resultBitmap.UnlockBits(bitmapData);
            using (MemoryStream ms = new MemoryStream())
            {
                resultBitmap.Save(ms, ImageFormat.Png);
                ms.Position = 0;
                CloudBlob finalBlob = container.GetBlobReference(guid + ".png");
                finalBlob.UploadFromStream(ms);
                CloudBlob blob = container.GetBlobReference(guid + ".progress");
                blob.UploadText("101");
            }
            OnFinished();
        };
        int previousPercentage = -10;
        renderer.OnLineRendered += (obj, evt) =>
        {
            if (evt.Percentage - previousPercentage < 10)
                return;
            previousPercentage = evt.Percentage;
            CloudBlob blob = container.GetBlobReference(guid + ".progress");
            blob.UploadText(evt.Percentage.ToString());
        };
        renderer.Render(scene, RenderWidth, RenderHeight, (x, y, color) =>
        {
            var offset = x * 4 + y * bitmapData.Stride;
            rgbValues[offset] = (byte)(color.B * 255);
            rgbValues[offset + 1] = (byte)(color.G * 255);
            rgbValues[offset + 2] = (byte)(color.R * 255);
        });
    }
    catch (Exception ex)
    {
        CloudBlob blob = container.GetBlobReference(guid + ".progress");
        blob.DeleteIfExists();
        blob = container.GetBlobReference(guid + ".png");
        blob.DeleteIfExists();
        Trace.WriteLine(ex.ToString());
    }
}
The WorkingUnit works according to the following algorithm:
- Loading the scene
- Creating the raytracer
- Generating the picture and accessing the bytes array
- When the picture is rendered, saving it in a blob and updating the job progress state
- Launching the render
The raytracer
The raytracer is entirely written in C# 4.0 and uses the TPL (Task Parallel Library) to enable parallel code execution.
The following functionalities are supported (but as Yoda said, "Obvious is the code", so do not hesitate to browse the code):
- Fog
- Diffuse
- Ambient
- Transparency
- Reflection
- Refraction
- Shadows
- Complex objects
- Unlimited light sources
- Antialiasing
- Parallel rendering
- Octrees
The interesting point with a raytracer is that it is a massively parallelizable process. Indeed, a raytracer will execute strictly the same code for each pixel of the screen.
So the key point of the raytracer is:
Parallel.For(0, RenderHeight, y => ProcessLine(scene, y));
Then for each line, we will execute the following method in parallel on all the CPU cores of the computer:
void ProcessLine(Scene scene, int line)
{
    for (int x = 0; x < RenderWidth; x++)
    {
        if (!renderInProgress)
            return;
        RGBColor color = RGBColor.Black;
        if (SuperSamplingLevel == 0)
        {
            color = TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x, line, scene.Camera) }, scene, 0);
        }
        else
        {
            int count = 0;
            double size = 0.4 / SuperSamplingLevel;
            for (int sampleX = -SuperSamplingLevel; sampleX <= SuperSamplingLevel; sampleX += 2)
            {
                for (int sampleY = -SuperSamplingLevel; sampleY <= SuperSamplingLevel; sampleY += 2)
                {
                    color += TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x + sampleX * size, line + sampleY * size, scene.Camera) }, scene, 0);
                    count++;
                }
            }
            if (SuperSamplingLevel == 1)
            {
                color += TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x, line, scene.Camera) }, scene, 0);
                count++;
            }
            color = color / count;
        }
        color.Clamp();
        storePixel(x, line, color);
    }

    // Report progress
    lock (this)
    {
        linesProcessed++;
        if (OnLineRendered != null)
            OnLineRendered(this, new LineRenderedEventArgs { Percentage = (linesProcessed * 100) / RenderHeight, LineRendered = line });
    }
}
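The sub-pixel sampling grid used by the loops above can be isolated into a small helper (illustrative Python mirroring the C# loop bounds; the function name is mine):

```python
# For a supersampling level L, offsets run from -L to L in steps of 2,
# scaled by 0.4 / L; level 1 adds the pixel center as an extra sample.
def sample_offsets(level):
    """Return the (dx, dy) sub-pixel offsets at which rays are cast."""
    if level == 0:
        return [(0.0, 0.0)]
    size = 0.4 / level
    offsets = [(sx * size, sy * size)
               for sx in range(-level, level + 1, 2)
               for sy in range(-level, level + 1, 2)]
    if level == 1:
        offsets.append((0.0, 0.0))  # mirrors the SuperSamplingLevel == 1 branch
    return offsets
```

So level 1 averages 5 rays per pixel and level 2 averages 9, which is exactly the `count` the C# code divides by.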
The main part is the TraceRay method, which casts a ray for each pixel of a line:
private RGBColor TraceRay(Ray ray, Scene scene, int depth, SceneObject excluded = null)
{
    List<Intersection> intersections;
    if (excluded == null)
        intersections = IntersectionsOrdered(ray, scene).ToList();
    else
        intersections = IntersectionsOrdered(ray, scene).Where(intersection => intersection.Object != excluded).ToList();
    return intersections.Count == 0 ? scene.ClearColor : ComputeShading(intersections, scene, depth);
}
If the ray intersects no object, then the color of the background is returned (ClearColor). Otherwise, we have to evaluate the color of the intersected object:
private RGBColor ComputeShading(List<Intersection> intersections, Scene scene, int depth)
{
    Intersection intersection = intersections[0];
    intersections.RemoveAt(0);
    var direction = intersection.Ray.Direction;
    var position = intersection.Position;
    var normal = intersection.Normal;
    var reflectionDirection = direction - 2 * Vector3.Dot(normal, direction) * normal;
    RGBColor result = GetBaseColor(intersection.Object, position, normal, reflectionDirection, scene, depth);

    // Opacity
    if (IsOpacityEnabled && intersections.Count > 0)
    {
        double opacity = intersection.Object.Shader.GetOpacityLevelAt(position);
        double refractionIndex = intersection.Object.Shader.GetRefractionIndexAt(position);
        if (opacity < 1.0)
        {
            if (refractionIndex == 1 || !IsRefractionEnabled)
                result = result * opacity + ComputeShading(intersections, scene, depth) * (1.0 - opacity);
            else
            {
                // Refraction
                result = result * opacity + GetRefractionColor(position, Utilities.Refract(direction, normal, refractionIndex), scene, depth, intersection.Object) * (1.0 - opacity);
            }
        }
    }

    if (!IsFogEnabled)
        return result;

    // Fog
    double distance = (scene.Camera.Position - position).Length;
    if (distance < scene.FogStart)
        return result;
    if (distance > scene.FogEnd)
        return scene.FogColor;
    double fogLevel = (distance - scene.FogStart) / (scene.FogEnd - scene.FogStart);
    return result * (1.0 - fogLevel) + scene.FogColor * fogLevel;
}
The ComputeShading method will compute the base color of the object (taking into account all light sources). If the object is transparent or uses refraction or reflection, a new ray must be cast to compute the induced color.
At the end, the fog is added and the final color is returned.
As you can see, computing each pixel is really resource-consuming. So having huge raw power can drastically improve the rendering speed.
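The fog blend performed at the end of ComputeShading is a plain linear interpolation between the shaded color and the fog color, easy to check in isolation (illustrative Python version of that final step):

```python
# Linear fog: untouched before FogStart, pure fog beyond FogEnd,
# and a distance-proportional blend in between.
def apply_fog(color, fog_color, distance, fog_start, fog_end):
    if distance < fog_start:
        return color
    if distance > fog_end:
        return fog_color
    level = (distance - fog_start) / (fog_end - fog_start)
    return tuple(c * (1.0 - level) + f * level
                 for c, f in zip(color, fog_color))
```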
The client
The front-end client is written using HTML with a small part of JavaScript in order to make it a bit more dynamic:
var checkState = function () {
    $.getJSON("RenderStatusService.svc/GetProgress", { guid: guid, noCache: Math.random() }, function (result) {
        var percentage = result.d;
        var percentageAsNumber = parseInt(percentage);

        if (percentage == "-1") {
            $("#progressMessage").text("Request queued");
            setTimeout(checkState, 1000);
            return;
        }

        if (isNaN(percentageAsNumber)) {
            window.localStorage.removeItem("currentGuid");
            restartUI();
            return;
        }

        if (percentageAsNumber != 101) {
            $("#progressBar").progressbar({ value: percentageAsNumber });
            $("#progressMessage").text("Rendering in progress…" + result.d + "%");
            setTimeout(checkState, 1000);
        }
        else {
            $("#renderInProgressDiv").slideUp("fast");
            $("#final").slideDown("fast");
            $("#imageLoadingMessage").slideDown("fast");
            $.getJSON("RenderStatusService.svc/GetImageUrl", { guid: guid, noCache: Math.random() }, function (url) {
                finalImage.src = url.d;
                document.getElementById("imageHref").href = url.d;
            });
            window.localStorage.removeItem("currentGuid");
        }
    });
};
If the web service returns -1, the request is queued. If the returned value is between 0 and 100, we can update the progress bar, and if the value is 101, we can get and display the rendered picture.
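The decision logic of checkState can be distilled into a small pure function (sketched in Python for testability; the action names are illustrative):

```python
# Map the string returned by GetProgress to the next UI action:
# "-1" -> queued, "0".."100" -> progress bar, "101" -> show the image,
# anything non-numeric -> a server-side error message, so reset the UI.
def next_ui_action(progress):
    try:
        value = int(progress)
    except ValueError:
        return "reset"
    if value == -1:
        return "show-queued"
    if value == 101:
        return "show-image"
    return "update-progress-bar"
```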
Conclusion
As we can see, Azure gives us all the required tools to develop and debug for the cloud.
I sincerely invite you to install the SDK and develop your own raytracer!
To go further
Some useful links:
- https://blogs.msdn.com/b/windowsazure/
- https://www.windowsazure.com/en-us/develop/downloads/
Silverlight 5 is out!
It's a really great pleasure for me to announce that Silverlight 5 is finally available:
https://www.microsoft.com/silverlight/
Links
The Silverlight 5 Toolkit was also updated to support the RTM: https://silverlight.codeplex.com/releases/view/78435
And don't forget to have a look at my blog post about all the new features of the toolkit: https://blogs.msdn.com/b/eternalcoding/archive/2011/12/10/silverlight-toolkit-september-2011-for-silverlight-5-what-s-new.aspx
And of course, Babylon was updated for the RTM too: https://code.msdn.microsoft.com/Babylon-3D-engine-f0404ace
For all the downloads and the features list, please go to: https://www.silverlight.net/learn/overview/what's-new-in-silverlight-5
Security and 3D
First of all, please read this article: https://blogs.msdn.com/b/eternalcoding/archive/2011/10/18/some-reasons-why-my-3d-is-not-working-with-silverlight-5.aspx
By the way, you may experience security errors with Silverlight 5 RTM when you want to use the wonderful new 3D feature. In fact, some graphics drivers may allow malicious code to execute. That may lead to an unwanted hard reset or a blue screen.
Starting with the beta version, to protect users from this kind of trouble, we initiated a first scenario where all Windows XP Display Driver Model (XPDM) drivers on Windows XP, Windows Vista, and Windows 7 are blocked by default. Permission is granted automatically in elevated trust scenarios and Windows Display Driver Model (WDDM) drivers do not require user consent at run-time.
But as always, features, including security features, continue to be refined and added during post-beta development.
And for the RTM version, there were a number of approaches considered to further improve security and stability, but the solution to block 3D in partial trust by default was the best choice for this release. Permission is still granted automatically in elevated trust scenarios.
To grant 3D permissions, you simply have to right-click on your Silverlight plugin, go to the Permissions tab and allow your application:
You can of course help your users discover and understand this by using the following code in order to tailor a good user experience:
if (GraphicsDeviceManager.Current.RenderMode != RenderMode.Hardware)
{
    switch (GraphicsDeviceManager.Current.RenderModeReason)
    {
        case RenderModeReason.GPUAccelerationDisabled:
            throw new Exception(Strings.NoGPUAcceleration);
        case RenderModeReason.SecurityBlocked:
            throw new Exception(Strings.HardwareAccelerationBlockedBySecurityReason);
        case RenderModeReason.Not3DCapable:
            throw new Exception(Strings.HardwareAccelerationNotAvailable);
        case RenderModeReason.TemporarilyUnavailable:
            throw new Exception(Strings.HardwareAccelerationNotAvailable);
    }
}
It is really important to explain to your users why the 3D is deactivated. As there is a potential security hole, it is their responsibility to allow the 3D experience.
Support and lifecycle
The support status for Silverlight is now updated for SL5:
https://support.microsoft.com/gp/lifean45#sl5
Here is the extract for Silverlight 5:
"Silverlight 5 - Microsoft will provide assisted and unassisted no-charge support for customers using versions of Silverlight 5. Paid support options are available to customers requiring support with issues beyond install and upgrade issues. Microsoft will continue to ship updates to the Silverlight 5 runtime or Silverlight 5 SDK, including updates for security vulnerabilities as determined by the MSRC. Developers using the Silverlight 5 development tools and developing applications for Silverlight 5 can use paid assisted-support options to receive development support.
Silverlight 5 will support the browser versions listed on this page through 10/12/2021, or through the support lifecycle of the underlying browsers, whichever is shorter. As browsers evolve, the support page will be updated to reflect levels of compatibility with newer browser versions."
Silverlight Toolkit (December 2011) for Silverlight 5 - What's new?
The new version of the Silverlight Toolkit (December 2011) for Silverlight 5 is out and you can grab it here:
https://silverlight.codeplex.com/releases/view/78435
Update: Babylon Engine now uses the Silverlight 5 Toolkit: https://code.msdn.microsoft.com/Babylon-3D-engine-f0404ace
I had the pleasure of working on this version and I'm pleased to write this article to help you discover how the Toolkit enhances Silverlight 5 with the following features:
- Seamless integration of 3D models and other assets with the Content Pipeline
- New Visual Studio templates for creating:
- Silverlight 3D Application
- Silverlight 3D Library
- Silverlight Effect
- New samples to demo these features
Seamless integration with the Content Pipeline
The toolkit comes with a new assembly: Microsoft.Xna.Framework.Content.dll. This assembly allows you to load assets from the .xnb file format (produced by the Content Pipeline).
Using the new Visual Studio templates (which I will describe later), you can now easily port existing 3D projects directly to Silverlight 5!
The Microsoft.Xna.Framework.Content.dll assembly will add the following classes to Silverlight 5:
- ContentManager
- Model
- SpriteFont and SpriteBatch
The toolkit also comes with the Microsoft.Xna.Framework.Toolkit.dll assembly which will add the following classes to Silverlight 5:
- SilverlightEffect
- Mouse, MouseState
- Keyboard, KeyboardState
ContentManager
The documentation for this class can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.content.contentmanager.aspx
The ContentManager class is the representative of the Content Pipeline within your code. It is responsible for loading objects from .xnb files.
To create a ContentManager, you just have to call the following code:
ContentManager contentManager = new ContentManager(null, "Content");
There are restrictions for this class: the ContentManager for Silverlight can only support one Content project and the RootDirectory must be set to "Content".
Using it is really simple because it provides a simple Load method which can be used to create your objects:
// Load fonts
hudFont = contentManager.Load<SpriteFont>("Fonts/Hud");

// Load overlay textures
winOverlay = contentManager.Load<Texture2D>("Overlays/you_win");

// Music
backgroundMusic = contentManager.Load<SoundEffect>("Sounds/Music");
Model
The documentation for this class can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.model.aspx
The Model class has the same API as in XNA 4 and it will allow you to load and render 3D models from XNB files:
Model tankModel = content.Load<Model>("tank");

// Draw the model.
tankModel.Draw();
You can also use bones if your model supports them:
Model tankModel = content.Load<Model>("tank");
tankModel.Root.Transform = world;
tankModel.CopyAbsoluteBoneTransformsTo(boneTransforms);

// Draw the model.
foreach (ModelMesh mesh in tankModel.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.World = boneTransforms[mesh.ParentBone.Index];
        effect.View = view;
        effect.Projection = projection;
        effect.EnableDefaultLighting();
    }
    mesh.Draw();
}
You can import models using the .x or .fbx format.
And thanks to the FBX importer, you can also import .3ds, .obj, .dxf and even Collada.
SpriteFont & SpriteBatch
The documentation for these classes can be found here:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.spritebatch.aspx
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.spritefont.aspx
The SpriteBatch class is used to display 2D textures on top of the render. You can use it to display a UI or sprites:
SpriteBatch spriteBatch = new SpriteBatch(graphicsDevice);

spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White);
spriteBatch.End();
As you can see, SpriteBatch only needs a texture to display.
SpriteFont allows you to use sprites to display text:
SpriteFont hudFont = contentManager.Load<SpriteFont>("Fonts/Hud");

spriteBatch.DrawString(hudFont, value, position + new Vector2(1.0f, 1.0f), Color.Black);
spriteBatch.DrawString(hudFont, value, position, color);
SpriteFont relies on SpriteBatch to draw and needs a font definition from the ContentManager.
SilverlightEffect
The toolkit introduces a new class called SilverlightEffect which lets you use .fx files.
It also supports .slfx, which is the default extension. There is no difference between .slfx and .fx, but as the XNA Effect Processor is already associated with .fx, the Silverlight Content Pipeline had to select another one.
You can now define a complete effect within a Content project and use it for rendering your models.
To do so:
- Create a .fx file with at least one technique
- Shader entry points must be parameterless
- Define render states
For example, here is a simple .fx file:
float4x4 WorldViewProjection;
float4x4 World;
float3 LightPosition;

// Structs
struct VS_INPUT
{
    float4 position : POSITION;
    float3 normal : NORMAL;
    float4 color : COLOR0;
};

struct VS_OUTPUT
{
    float4 position : POSITION;
    float3 normalWorld : TEXCOORD0;
    float3 positionWorld : TEXCOORD1;
    float4 color : COLOR0;
};

// Vertex Shader
VS_OUTPUT mainVS(VS_INPUT In)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;

    // Compute projected position
    Out.position = mul(In.position, WorldViewProjection);

    // Compute world normal
    Out.normalWorld = mul(In.normal, (float3x3)World);

    // Compute world position
    Out.positionWorld = (mul(In.position, World)).xyz;

    // Transmit vertex color
    Out.color = In.color;

    return Out;
}

// Pixel Shader
float4 mainPS(VS_OUTPUT In) : COLOR
{
    // Light equation
    float3 lightDirectionW = normalize(LightPosition - In.positionWorld);
    float ndl = max(0, dot(In.normalWorld, lightDirectionW));

    // Final color
    return float4(In.color.rgb * ndl, 1);
}

// Technique
technique MainTechnique
{
    pass P0
    {
        VertexShader = compile vs_2_0 mainVS(); // Must be a parameterless entry point
        PixelShader = compile ps_2_0 mainPS(); // Must be a parameterless entry point
    }
}
The Toolkit will add the required processors to the Content Pipeline in order to create the .xnb file for this effect.
To use this effect, you just have to instantiate a new SilverlightEffect inside your code:
mySilverlightEffect = scene.ContentManager.Load<SilverlightEffect>("CustomEffect");
Then you can retrieve the effect's parameters:
worldViewProjectionParameter = mySilverlightEffect.Parameters["WorldViewProjection"];
worldParameter = mySilverlightEffect.Parameters["World"];
lightPositionParameter = mySilverlightEffect.Parameters["LightPosition"];
To render an object with your effect, it is the same code as in XNA 4:
worldParameter.SetValue(Matrix.CreateTranslation(1, 1, 1));
worldViewProjectionParameter.SetValue(WorldViewProjection);
lightPositionParameter.SetValue(LightPosition);

foreach (var pass in mySilverlightEffect.CurrentTechnique.Passes)
{
    // Apply pass
    pass.Apply();

    // Set vertex buffer and index buffer
    graphicsDevice.SetVertexBuffer(vertexBuffer);
    graphicsDevice.Indices = indexBuffer;

    // The shaders are already set so we can draw primitives
    graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, VerticesCount, 0, FaceCount);
}
Texture2D, TextureCube & SoundEffect
Silverlight 5 provides the Texture2D, TextureCube and SoundEffect classes. With the Toolkit, you will be able to load them from the ContentManager:
// Load overlay textures
winOverlay = contentManager.Load<Texture2D>("Overlays/you_win");

// Music
backgroundMusic = contentManager.Load<SoundEffect>("Sounds/Music");
Mouse and Keyboard
In order to facilitate porting existing 3D applications and to accommodate polling input application models, we also added partial support for the Microsoft.Xna.Framework.Input namespace.
So you will be able to request the MouseState and KeyboardState everywhere you want:
public MainPage()
{
    InitializeComponent();

    Mouse.RootControl = this;
    Keyboard.RootControl = this;
}
However, there is a slight difference from original XNA on other endpoints: you have to register the root control which will provide the events for Mouse and Keyboard. The MouseState positions will be relative to the upper left corner of this control:
private void myDrawingSurface_Draw(object sender, DrawEventArgs e)
{
    // Render scene
    scene.Draw();

    // Let's go for another turn!
    e.InvalidateSurface();

    // Get mouse and keyboard state
    MouseState mouseState = Mouse.GetState();
    KeyboardState keyboardState = Keyboard.GetState();

    …
}
The MouseState and KeyboardState are similar to XNA versions:
- https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.input.mousestate.aspx
- https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.input.keyboardstate.aspx
Extensibility
The Silverlight Content Pipeline can be extended the same way as the XNA Content Pipeline on other endpoints. You can provide your own implementation for loading assets from somewhere other than the embedded .xnb files.
For example, you can write a class that will stream .xnb files from the network. To do so, you have to inherit from ContentManager and provide your own implementation of OpenStream:
public class MyContentManager : ContentManager
{
    public MyContentManager() : base(null)
    {
    }

    protected override System.IO.Stream OpenStream(string assetName)
    {
        return base.OpenStream(assetName);
    }
}
You can also provide your own type reader. Here is, for example, the custom type reader for SilverlightEffect:
/// <summary>
/// Read SilverlightEffect.
/// </summary>
public class SilverlightEffectReader : ContentTypeReader<SilverlightEffect>
{
    /// <summary>
    /// Read and create a SilverlightEffect
    /// </summary>
    protected override SilverlightEffect Read(ContentReader input, SilverlightEffect existingInstance)
    {
        int techniquesCount = input.ReadInt32();
        EffectTechnique[] techniques = new EffectTechnique[techniquesCount];

        for (int techniqueIndex = 0; techniqueIndex < techniquesCount; techniqueIndex++)
        {
            int passesCount = input.ReadInt32();
            EffectPass[] passes = new EffectPass[passesCount];

            for (int passIndex = 0; passIndex < passesCount; passIndex++)
            {
                string passName = input.ReadString();

                // Vertex shader
                int vertexShaderByteCodeLength = input.ReadInt32();
                byte[] vertexShaderByteCode = input.ReadBytes(vertexShaderByteCodeLength);
                int vertexShaderParametersLength = input.ReadInt32();
                byte[] vertexShaderParameters = input.ReadBytes(vertexShaderParametersLength);

                // Pixel shader
                int pixelShaderByteCodeLength = input.ReadInt32();
                byte[] pixelShaderByteCode = input.ReadBytes(pixelShaderByteCodeLength);
                int pixelShaderParametersLength = input.ReadInt32();
                byte[] pixelShaderParameters = input.ReadBytes(pixelShaderParametersLength);

                MemoryStream vertexShaderCodeStream = new MemoryStream(vertexShaderByteCode);
                MemoryStream pixelShaderCodeStream = new MemoryStream(pixelShaderByteCode);
                MemoryStream vertexShaderParametersStream = new MemoryStream(vertexShaderParameters);
                MemoryStream pixelShaderParametersStream = new MemoryStream(pixelShaderParameters);

                // Instantiate pass
                SilverlightEffectPass currentPass = new SilverlightEffectPass(passName, GraphicsDeviceManager.Current.GraphicsDevice, vertexShaderCodeStream, pixelShaderCodeStream, vertexShaderParametersStream, pixelShaderParametersStream);
                passes[passIndex] = currentPass;

                vertexShaderCodeStream.Dispose();
                pixelShaderCodeStream.Dispose();
                vertexShaderParametersStream.Dispose();
                pixelShaderParametersStream.Dispose();

                // Render states
                int renderStatesCount = input.ReadInt32();
                for (int renderStateIndex = 0; renderStateIndex < renderStatesCount; renderStateIndex++)
                {
                    currentPass.AppendState(input.ReadString(), input.ReadString());
                }
            }

            // Instantiate technique
            techniques[techniqueIndex] = new EffectTechnique(passes);
        }

        return new SilverlightEffect(techniques);
    }
}
New Visual Studio templates
The toolkit will install two new project templates and a new item template:
Silverlight3DApp
This template will produce a full working Silverlight 3D application.
The new solution will be composed of 4 projects:
- Silverlight3DApp: The main project
- Silverlight3DAppContent: The content project attached to the main project
- Silverlight3DWeb: The web site that will display the main project
- Silverlight3DWebContent: A content project attached to the web site if you want to stream your .xnb files from the web site instead of using embedded ones. This will let you distribute a smaller .xap.
The main project (Silverlight3DApp) is built around two objects:
- A scene object which:
- Creates the ContentManager
- Handles the DrawingSurface Draw event
- A cube object which:
- Creates a vertex buffer and index buffer
- Uses the ContentManager to retrieve a SilverlightEffect (Customeffect.slfx) from the content project
- Configures and uses the SilverlightEffect to render
Silverlight3DLib
This template will produce a Silverlight library without any content but with all the Microsoft.Xna.Framework references set.
And the resulting project will look like:
SilverlightEffect
This item template can be used within a Content project to add a custom .slfx file that will work with the SilverlightEffect class:
The file content will be the following:
float4x4 World;
float4x4 View;
float4x4 Projection;

// TODO: add effect parameters here.

struct VertexShaderInput
{
    float4 Position : POSITION0;

    // TODO: add input channels such as texture
    // coordinates and vertex colors here.
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;

    // TODO: add vertex shader outputs such as colors and texture
    // coordinates here. These values will automatically be interpolated
    // over the triangle, and provided as input to your pixel shader.
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    // TODO: add your vertex shader code here.

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // TODO: add your pixel shader code here.

    return float4(1, 0, 0, 1);
}

technique Technique1
{
    pass Pass1
    {
        // TODO: set renderstates here.

        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
New samples to demo these features
Finally, to help you discover and learn all these features, we added some cool samples:
Bloom
This sample shows you how to use sprites to achieve post-processing effects such as "bloom". It also uses the Content Pipeline to import a tank model from a .fbx file.
CustomModelEffect
This sample shows you how custom effects can be applied to a model using the Content Pipeline.
Generated geometry
This sample shows how 3D models can be generated by code during the Content Pipeline build process.
Particles
This sample introduces the concept of a particle system, and shows how to draw particle effects using SpriteBatch. Two particle effects are demonstrated: an explosion and a rising plume of smoke.
Primitives3D
This sample provides easily reusable code for drawing basic geometric primitives.
Platformer
This sample is a complete game with 3 levels provided (you can easily add yours). It shows the usage of SpriteBatch, SpriteFont and SoundEffect within a platform game. It also uses the Keyboard class to control the player.
SimpleAnimation
This sample shows how to apply program-controlled rigid body animation to a 3D model loaded with the ContentManager.
Skinning
This sample shows how to process and render a skinned character model using the Content Pipeline.
Conclusion
As you noticed, all these new additions to the Silverlight Toolkit are made to make it easy to get started with the new Silverlight 3D features by providing developer tools to improve usability and productivity.
You can now easily start a new project that leverages both the concepts of XNA and Silverlight. It becomes easy to work with 3D concepts and resources like shaders, models, sprites, effects, etc.
We also tried to reduce the effort needed to port existing 3D applications to Silverlight.
So now it's up to you to discover the wonderful world of 3D using Silverlight 5!
Silverlight 5 Toolkit Compile error -2147024770 (0, 0): error : Unknown compile error (check flags against DX version)
(Wow, what a funny title!)
Some Silverlight 5 Toolkit users send me mails about this error message:
Error 1 Compile error -2147024770
(0, 0): error : Unknown compile error (check flags against DX version) (myfile.slfx)
To correct the problem, you just have to install the latest DirectX runtime:
https://www.microsoft.com/download/en/details.aspx?id=8109
This error is generated by the Silverlight effect file compiler, which wants to use the DirectX effect compiler and doesn't manage to locate it.
The DirectX effect compiler is located in a library called d3dx9_xx.dll. The "_xx_" part may vary according to the version of the DirectX SDK used (in the case of the Silverlight 5 Toolkit, we used the June 2010 version, which relies on d3dx9_43.dll).
The problem is that this library is not installed by default on Windows (but a lot of applications install it). That's why you may be required to install the DirectX runtime, which comes with all versions of the library.
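If you want to verify the prerequisite yourself, a small desktop utility can probe for the DLL before blaming the compiler. This is a hypothetical helper sketch: the Win32 LoadLibrary import is real, but the class and method names are mine, and it must run as a regular desktop application (P/Invoke is not available in the sandboxed Silverlight plugin):

```csharp
using System;
using System.Runtime.InteropServices;

static class D3DXProbe
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr LoadLibrary(string fileName);

    // True when the D3DX library used by the June 2010 SDK is installed.
    public static bool IsD3DX43Present()
    {
        return LoadLibrary("d3dx9_43.dll") != IntPtr.Zero;
    }
}
```

If this returns false on the build machine, installing the DirectX runtime linked above should fix the compile error.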
Kinect for Windows beta 2 is out
The new site for Kinect for Windows and the new beta of the SDK are out!
This new version focuses on stability and performance:
- Faster and improved skeletal tracking
- Status change support
- Improved joint tracking
- 64-bit support
- Audio can be used from the UI thread
We also announce the release date of the commercial version for early 2012.
Faster and improved skeletal tracking
With updates to the multi-core model, Kinect for Windows is now 20% faster than it was in the last release (beta 1 refresh). Also, the accuracy rate of skeletal tracking and joint recognition has been substantially improved.
When using two Kinects, you can now specify which one is used for skeletal tracking.
Status change support
You can now plug and unplug your Kinect without losing work. There is API support for detecting and managing device status changes, such as device unplugged, device plugged in, power unplugged, etc. Apps can reconnect to the Kinect device after it is plugged in, after the computer returns from suspend, etc.
Improved joint tracking
The accuracy rate of joint recognition and tracking has been substantially improved.
64-bit support
The SDK can be used to build 64-bit applications. Previously, only 32-bit applications could be built.
Audio can be used from the UI thread
Developers using audio inside WPF no longer need to access the DMO from a separate thread. You can create the KinectAudioSource on the UI thread and simplify your code.
Additional information
Furthermore, this new version now supports Windows "8" (desktop side).
The new site and the SDK can be found here:
https://www.kinectforwindows.org
To use this new version you only need to recompile your code, as no breaking changes were introduced.
The Kinect Toolbox was obviously updated to support the new SDK:
https://kinecttoolbox.codeplex.com/
The NuGet package can be found here:
https://nuget.org/List/Packages/KinectToolbox
Some reasons why my 3D is not working with Silverlight 5
The aim of this post is to give you some tricks to enable 3D experiences with Silverlight 5 in your applications.
But first of all, let's see how you can activate accelerated 3D support inside a Silverlight 5 project.
Standard way
To activate accelerated 3D support, the host of Silverlight must activate it using a param named "enableGPUAcceleration":
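The param sits in the plugin's object tag in the hosting page. Here is a minimal sketch of the relevant markup (the source value and sizing attributes are illustrative placeholders, not from the original post):

```html
<object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
  <param name="source" value="ClientBin/MyApp.xap" />
  <!-- Enables GPU compositing for XAML and, with it, the accelerated 3D support -->
  <param name="enableGPUAcceleration" value="true" />
</object>
```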
By doing this, you will allow Silverlight to use the graphics card's power to render XAML elements. And at the same time, you will activate the accelerated 3D support.
Troubleshooting
You can detect if 3D is activated or not in your code through the GraphicsDeviceManager class:
// Check if GPU is on
if (GraphicsDeviceManager.Current.RenderMode != RenderMode.Hardware)
{
    MessageBox.Show("Please activate enableGPUAcceleration=true on your Silverlight plugin page.", "Warning", MessageBoxButton.OK);
}
The RenderMode property will be set to Unavailable when 3D is not activated.
In this case, the property GraphicsDeviceManager.Current.RenderModeReason will be set to one of these values:
- Not3DCapable
- GPUAccelerationDisabled
- SecurityBlocked
- TemporarilyUnavailable
Not3DCapable
You will get this reason when your graphics card is too old to support required 3D features such as shader model 2.0.
GPUAccelerationDisabled
You forgot to set enableGPUAcceleration to true in the hosting HTML page.
SecurityBlocked
When you launch your application, a specific domain entry for 3D can be set in the Permissions tab of the Silverlight Configuration panel.
You will not see this entry unless your domain group policy sets it (and in this case you have to ask your administrator for permission) or you run your Silverlight application under Windows XP.
When you run a Silverlight application under Windows XP, the following behavior happens:
- 3D is enabled automatically in elevated trust (out of browser)
- The first time a user runs a non-elevated 3D application, a domain entry set to Deny is added to the Permissions tab of the Silverlight Configuration panel
To use a 3D application under Windows XP in a non-elevated context, you must change the pre-created Deny entry to Allow.
TemporarilyUnavailable
This happens when the device is lost (for example under the lock screen on Windows XP; it doesn't happen much in WDDM), where Silverlight expects the rendering surface to return at some point.
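The reasons above can be surfaced to the user with a simple switch. Here is a sketch (the enum values come from the list above; the message strings are mine):

```csharp
string message = null;

switch (GraphicsDeviceManager.Current.RenderModeReason)
{
    case RenderModeReason.Not3DCapable:
        message = "Your graphics card does not support the required 3D features (shader model 2.0).";
        break;
    case RenderModeReason.GPUAccelerationDisabled:
        message = "Please set enableGPUAcceleration=true in the hosting HTML page.";
        break;
    case RenderModeReason.SecurityBlocked:
        message = "3D is blocked for this domain: change the entry to Allow in the Permissions tab of the Silverlight Configuration panel.";
        break;
    case RenderModeReason.TemporarilyUnavailable:
        message = "The device is temporarily lost; rendering should resume shortly.";
        break;
}

if (message != null)
{
    MessageBox.Show(message, "3D unavailable", MessageBoxButton.OK);
}
```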
Additional tips
One other important point to know is that 3D won't work correctly in windowless mode. In this case, the draw event will be driven from the UI thread and so will be fired only during UI events such as page scroll, for example.
Conclusion
I hope this post was useful to help you use the wonderful accelerated 3D experience of Silverlight 5.
As you can see, the best solution to handle 3D support is to use GraphicsDeviceManager.Current.RenderModeReason.
Useful links
- Silverlight 5 downloads
- Silverlight 5 Toolkit
- Silverlight 5 RC – What's new in the 3D world
- Silverlight Toolkit (September 2011) for Silverlight 5 – What's new
- New version of Babylon engine for Silverlight 5 and Silverlight 5 Toolkit
New version of Babylon engine for Silverlight 5 and Silverlight 5 Toolkit
I've just updated the source of the Babylon engine. You can grab the bits here:
https://code.msdn.microsoft.com/silverlight/Babylon-3D-engine-f0404ace
This new version uses the new content pipeline of the toolkit and is compiled using Silverlight 5 RC.
You can play with the exposed shaders using the new SilverlightEffect class.
It is now time to unleash the power of accelerated 3D!!