
How to Draw Array in Autocad 2012

Rectangular Array

Polar, rectangular, and path arrays

Elliot J. Gindis , Robert C. Kaebisch , in Up and Running with AutoCAD® 2020, 2020

Legacy rectangular array (Pre-AutoCAD 2012)

The legacy rectangular array is accessed via the same methods as the legacy polar array. Remember that, in the latest versions of AutoCAD, you can still open this dialog box by typing in arrayclassic. When the dialog box appears, make sure the Rectangular Array radio button is selected and that you see what is shown in Fig. 8.17 (also taken from AutoCAD 2011). The procedure remains basically the same: select the object, enter the number of rows and columns, then the distances between those rows and columns. Preview the result and press OK.

Figure 8.17. Array Classic dialog box.

Note again that the entire array is not a block, just a collection of objects; to redo the whole thing, you must erase all the copies (leaving just one I-beam column) and repeat everything. This array also has none of the editing options introduced in AutoCAD 2012.

In general, polar arrays are used more often than the rectangular ones, but both are important to know.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780128198629000087

Vectors and Matrices

Frank E. Harris , in Mathematics for Physical Science and Engineering, 2014

Matrices: Definition and Operations

We define a matrix to be a rectangular array of numbers (called its elements) for which various operations are defined. The elements are arranged in horizontal rows and vertical columns; if a matrix has m rows and n columns, it is referred to as an m × n matrix. In spoken language, an m × n matrix is usually called an "m by n matrix." If the number of rows equals the number of columns, the matrix is identified as square. In this book a matrix will usually be denoted by an upper-case character in a sans-serif font; an example is A. The elements of matrix A are often referred to as A_ij or a_ij; both conventions are in common use, and both mean the element in row i and column j. Notice that the row index is always given first. Note also that if i ≠ j, A_ij and A_ji refer to different positions in the array, and there is in general no reason to expect that A_ij = A_ji. For later use, we note that elements A_ij with i = j (hence A_ii) are called the diagonal elements of the matrix, and a line through these diagonal elements is called the principal diagonal. The array of numbers (or symbols) constituting a matrix is conventionally enclosed in ordinary parentheses ( ), not curly braces { } or vertical lines | |. Matrix algebra is defined in such a way that a matrix with only one column is synonymous with a column vector; matrices with only one row are called row vectors. Some matrices are exhibited in Fig. 4.8. It is important to realize that (like a vector) a matrix is not a single number and cannot be reduced to a single number; it is an array.

Figure 4.8. From left to right, matrices of dimension 4 × 1 (column vector), 3 × 2 , 2 × 3 , 2 × 2 (square), 1 × 2 (row vector).
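These indexing conventions are easy to sketch in a few lines of Python (an illustration added here, not part of the chapter), storing the matrix as a nested list of rows:

```python
# A 2 x 3 matrix stored as a list of rows: A[i][j] is the element
# in row i and column j (row index given first, as in the text).
A = [[1, 2, 3],
     [4, 5, 6]]

m = len(A)        # number of rows: 2
n = len(A[0])     # number of columns: 3

# A[i][j] and A[j][i] are different positions when i != j.
assert A[0][1] != A[1][0]

# Diagonal elements A[i][i] lie on the principal diagonal.
diagonal = [A[i][i] for i in range(min(m, n))]
print(diagonal)
```

A 1-column nested list plays the role of a column vector, and a 1-row one the role of a row vector, exactly as in Fig. 4.8.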

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780128010006000043

The Digital Representation of Visual Information

RON BRINKMANN , in The Art and Science of Digital Compositing (Second Edition), 2008

Pixels, Components, and Channels

Digital images are stored as an array of individual dots, or pixels. Each pixel will have a certain amount of information associated with it, and although it is still common to present a pixel as having a specific color, in reality there may be a great deal of additional information that is associated with each pixel in an image. But rather than taking the typical computer-graphics route of discussing all the possible characteristics of a pixel first and then looking at how these combine to make an image, we want to think of a color image as a layered collection of simpler images, or channels.

Digital images are, of course, a rectangular array of pixels. And each pixel does, of course, have a characteristic color associated with it. But the color of a pixel is actually a function of three specific components of that pixel: the red, green, and blue (usually simplified to R, G, and B) components. 1 By using a combination of these three primary colors at different intensities, we can represent the full range of the visible spectrum for each pixel.

If we look at a single component (red, let's say) of every pixel in an image and view that as a whole image, we have what is known as a specific channel of the complete color image. Thus, instead of referring to an image as a collection of colored pixels, we can think of it as a three-layer combination of primary-colored channels.
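The channel decomposition described above can be sketched in Python (a hypothetical toy image, not from the book): pulling one component of every pixel yields a monochrome channel, and layering the three channels back together reproduces the color image.

```python
# A tiny 2 x 2 RGB image: each pixel is an (R, G, B) tuple.
image = [
    [(255, 0, 0), (0, 0, 255)],
    [(255, 255, 0), (0, 255, 0)],
]

def channel(img, index):
    """Extract one component of every pixel as a monochrome image."""
    return [[pixel[index] for pixel in row] for row in img]

red   = channel(image, 0)
green = channel(image, 1)
blue  = channel(image, 2)

# Re-layering the three channels reproduces the original image.
rebuilt = [
    [(r, g, b) for r, g, b in zip(red[y], green[y], blue[y])]
    for y in range(len(image))
]
assert rebuilt == image
```

Note that each channel is just an array of scalars with no inherent color, matching the grayscale view in Figure 3.3.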

Consider an image such as that shown in Figure 3.1. The next three images, Figure 3.2a, 3.2b, and 3.2c, show the red channel, the green channel, and the blue channel of this sample image.

Figure 3.1. A sample color image.

Figure 3.2a. The red channel from our sample image.

Figure 3.2b. The green channel.

Figure 3.2c. The blue channel.

In this case, we've tinted the individual channels to reflect the color they are associated with, and it is convenient to think of the channels as being transparent slides that can be layered (or projected) together to result in a full-color image. But these channels can also be thought of as monochrome images in their own right. Consider Figure 3.3, which is the same image as Figure 3.2a, the red channel, but without any colored tint applied. This monochrome representation is really more accurate, in the sense that a channel has no inherent color, and could conceivably be used for any channel in an image.

Figure 3.3. The red channel from our sample image, shown as a grayscale image.

The reason for dealing with images in this fashion is twofold. First, single pixels are simply too small a unit to deal with individually; in general, compositing artists spend about as much time worrying about individual pixels as a painter might spend worrying about an individual bristle on her paintbrush. More importantly, dealing with complete channels gives a great deal of additional control and allows for the use of techniques that were pioneered in the days when optical compositing was being developed. Color film actually consists of three different emulsion layers, each sensitive to either red, green, or blue light, and it became useful to photographically separate these three layers, or records, in order to manipulate them individually or in combination. A digital image (also known as a bit-mapped image) of this type will generally consist of these three color channels integrated into a single image, but it should be clear that these three channels can be thought of as separate entities that can be manipulated separately as well.

Looking at the individual channels for the image in question, we can see which areas contain large values of certain colors, and it is easy to find the correspondence in an individual channel. For instance, the background of the image—the area behind the parrot—is a fairly pure blue. If you look at the red channel of this image (Figure 3.2a again), it should be obvious that there is essentially no red content in this area. On the other hand, the front of the head and the beak are areas with a good deal of red in them (yellow having heavy red and green values), which is also obvious when looking at the individual channels. Later we will see how manipulating and combining the individual channels of an image can be used for everything from color correction to matte extraction.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780123706386000031

Permeability of porous media

Piotr DrygaÅ› , ... Wojciech Nawalaniec , in Applied Analysis of Composite Media, 2020

7.2.3 Longitudinal permeability. Square array

We study the longitudinal permeability of a spatially periodic rectangular array of circular cylinders, when a Newtonian fluid is flowing at low Reynolds number along the cylinders. Longitudinal laminar flow between unidirectional cylinders is governed by the two-dimensional Poisson equation ( Adler, 1992)

(7.10) ∇²w = −1,

where w is the component of velocity parallel to the cylinders; the viscosity μ and the pressure gradient are taken equal to 1. In general w satisfies a classical boundary condition (Dirichlet, Neumann, or their generalizations); here the velocity vanishes on ∂D, the boundary of the domain D in which Eq. (7.10) is fulfilled. The Poisson equation can be transformed into a functional equation, which can be solved by the method of successive approximations. The major advantage of this technique is that the permeability of the array can be expressed analytically in terms of the radius of the cylinders and the aspect ratio of the unit cell. The unit cell is a rectangle containing a single circular disc, the cross-section of a cylinder. The effective permeability K∥ (Adler, 1992) is defined as the double integral of the flow velocity over the unit cell

(7.11) K∥ = ∬_D w(x₁, x₂) dx₁ dx₂.

The series for the longitudinal permeability of the regular square lattice array of cylinders was calculated by Mityushev and Adler (2002a):

(7.12) K∥(f) = (1/(4π)) [ −log(f) − 1.47644 + 2f − 0.5f² − 0.0509713f⁴ + 0.077465f⁸ − 0.109757f¹² + 0.122794f¹⁶ − 0.146135f²⁰ + 0.244536f²⁴ − 0.322667f²⁸ + 0.310566f³² − 0.541237f³⁶ + 0.820399f⁴⁰ ] + O(f⁴¹),

where f = πr² is the area fraction of the cylinders. In the case being studied, applying Padé approximants to (7.12) (while leaving the log term outside) does not give any significant improvement. Such behavior is to be expected for a convergent series within its region of convergence, with a sufficiently large number of terms preserved after truncation. Since parallel-flow solutions are idealized solutions for the flow through cigarette filters, the series (7.12) has a certain practical value.
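The truncated series is straightforward to evaluate numerically. The sketch below uses the coefficients of Eq. (7.12); the extraction in this copy dropped some minus signs, so the sign pattern here (alternating after the low-order terms, matching the classical Drummond–Tahir expansion) is an assumption of this illustration.

```python
import math

# Truncated series for the longitudinal permeability K_par(f) of a
# square array of cylinders, after Eq. (7.12).  The sign pattern is
# an editorial assumption (see lead-in); coefficients as printed.
COEFFS = {  # power: coefficient
    1: 2.0, 2: -0.5, 4: -0.0509713, 8: 0.077465, 12: -0.109757,
    16: 0.122794, 20: -0.146135, 24: 0.244536, 28: -0.322667,
    32: 0.310566, 36: -0.541237, 40: 0.820399,
}

def k_parallel(f):
    """Longitudinal permeability vs. area fraction f = pi * r**2."""
    s = -math.log(f) - 1.47644
    for power, c in COEFFS.items():
        s += c * f ** power
    return s / (4.0 * math.pi)

# Permeability is positive and decreases as the cylinders grow.
print(k_parallel(0.1), k_parallel(0.5))
```

Because the prefactor is 1/(4π) and the leading term is −log(f), the permeability stays positive and finite over the physical range of f, consistent with the discussion below.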

In order to explain the methodological difference between our analytical approach and various simplifications, consider the following expression for the non-dimensional permeability of parallel flow through a square array of cylinders, from Tamayol and Bahrami (2010a):

(7.13) K∥(f) = K∥ / (2r)² = ( −0.0186f⁴ − f²/2 + 2f − log(f) − 1.479 ) / (16f).

The series (7.12) by itself provides better accuracy than (7.13) when compared with the numerical results of Sangani and Yao (1988). The percentage error of the series (7.12) equals 0.193% at f = 0.7, while formula (7.13) gives an error of 7.177%. The longitudinal permeability remains finite at f_c. The seepage at f = f_c predicted by the series (7.12) is significantly (by 17.9%) smaller than the prediction of (7.13). The paper (Tamayol and Bahrami, 2010a) is based on an intuitive approximation of the considered array by a 1D array. Unfortunately, the disagreement between (7.12) and (7.13) is frequently explained by engineers as a difference in basic modeling. By the principle of democracy, both models would then have equal rights. However, here we are in the framework of a properly stated mathematical model in which the permeability is uniquely determined. The model is already fixed and is the same in both cases; what we actually discuss is not a model but a method of solution. The comparison of (7.12) and (7.13) leads us to the conclusion that the f⁴ term in formula (7.13) is wrong.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780081026700000160

Polar, rectangular, and path arrays

Elliot J. Gindis , Robert C. Kaebisch , in Up and Running with AutoCAD® 2022, 2022

Steps in creating a rectangular array

The first thing to do is to draw a convincing-looking I-beam. Because we need actual distances, you cannot just draw any random object; use known sizing this time around. With one of the columns in Fig. 8.11 as a model, create a 20″-wide by 26″-tall rectangle. Inside it, put an I-beam drawn with basic linework and a solid hatch fill, and shade the background light gray (Fig. 8.12). Finally, make a block out of it. That sets up the column for an array.

Figure 8.12. I-beam for column.

Now, what does AutoCAD need to know to create a rectangular array? The following three items are critical to a rectangular array, although note that only the first item is explicitly asked for by AutoCAD as you create the array:

It needs to know what object(s) to array (the I-beam column).

It needs to know how many rows and columns to create (rows = across, columns = up and down).

It needs to know the distance between the rows and the columns (from centerline to centerline).
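The three items above fully determine the array. As a quick sketch (not an AutoCAD feature, just arithmetic), here is how the counts and centerline spacings from this exercise fix the insertion point of every copy:

```python
# What the array command needs: counts and centerline spacings.
# Values match the exercise: a 5 x 5 grid, 72 inches apart.
rows, columns = 5, 5
row_spacing, column_spacing = 72, 72   # centerline to centerline

# Insertion point of each copy, measured from the original at (0, 0).
positions = [
    (col * column_spacing, row * row_spacing)
    for row in range(rows)
    for col in range(columns)
]

print(len(positions))   # 25 copies in total
print(positions[-1])    # farthest copy: (288, 288)
```

Notice that a 5 × 5 array at 72″ spacing spans only 4 × 72 = 288″ in each direction, since spacing is counted between centerlines, not per item.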

Step 1. Start up the array command via any of the preceding methods (it is recommended to have the Ribbon turned on).

AutoCAD says: Select objects:

Step 2. Select the I-beam column block.

AutoCAD says: 1 found

Step 3. Press Enter. A small array of items is created, with grips visible.

AutoCAD says: Type = Rectangular Associative = Yes

Select grip to edit array or [ASsociative/Base point/COUnt/Spacing/COLumns/Rows/Levels/eXit]<eXit>:

Step 4. You have now successfully created the basic rectangular array. You still need to define the exact number of rows and columns, as well as the distances between them, because this initial array is rather arbitrary. You can do this via the command line, dynamically (using grips), or via the Ribbon. Fig. 8.14 shows how you can pull the items diagonally in a dynamic fashion using the center grip; you can also use the outer arrow grips. This dynamic method is not appropriate for setting precise distances between the columns (which for now are arbitrary values), but it does make creating the pattern very easy.

Figure 8.13. Creating a rectangular array dynamically.

Figure 8.14. Rectangular array editing.

Step 5. If you have the Ribbon up, then you just enter values. We want to create a 5 × 5 array, so start by entering those values into the Ribbon's Columns: and Rows: fields. For the distances between the columns and rows, use the value 72.

Step 6. If using the command line, simply follow the prompts for Columns, Rows, and Spacing. Regardless of which you start with (Rows or Columns), AutoCAD asks for the spacing as needed. Press Enter when done. Your rectangular array should look like Fig. 8.13.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780323899239000081

Digital electronics

Martin Plonus , in Electronics and Communications for Scientists and Engineers (Second Edition), 2020

7.6.4 Coincident decoding

In large RAM modules, the memory cells are arranged in huge rectangular arrays. Linear addressing, described above, activates a single word-select line and can become unwieldy in huge arrays, necessitating very long addresses. In linear addressing, a decoder with k inputs and 2^k outputs requires 2^k AND gates with k inputs per gate. The total number of AND gates can be reduced by employing a two-part addressing scheme in which the X address and the Y address of the rectangular array are given separately. Two decoders are used, one performing the X selection and the other the Y selection in the two-dimensional array. The intersection (coincidence) of the X and Y lines identifies and selects one cell in the array. The only difference is that another Select line is needed in the cell structure of Fig. 7.32, which is easily implemented by changing the three AND gates to quadruple-input AND gates.
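The savings are dramatic, as this small sketch shows (it counts only the decoder AND gates, ignores the extra per-cell select gate, and assumes k is even so the address splits into equal X and Y halves):

```python
# Decoder gate counts for addressing 2**k memory words.

def linear_decoder_gates(k):
    """Linear addressing: one k-input AND gate per word-select line."""
    return 2 ** k

def coincident_decoder_gates(k):
    """Two-part (coincident) addressing: split the address into X and
    Y halves; each half drives its own, much smaller decoder."""
    half = k // 2
    return 2 * 2 ** half

k = 20  # e.g. addressing a 1M-word array
print(linear_decoder_gates(k))      # 1048576 gates
print(coincident_decoder_gates(k))  # 2048 gates
```

For a 20-bit address, coincident decoding cuts the decoder from about a million gates to a couple of thousand, at the cost of one extra select input per cell.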

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780128170083000073

Digital Images and Image Manipulation

TOM McREYNOLDS , DAVID BLYTHE , in Advanced Graphics Programming Using OpenGL, 2005

4.1 Image Representation

The output of the rendering process is a digital image stored as a rectangular array of pixels in the color buffer. These pixels may be displayed on a CRT or LCD display device, copied to application memory to be stored or further manipulated, or re-used as a texture map in another rendering task. Each pixel value may be a single scalar component, or a vector containing a separate scalar value for each color component.

Details on how a geometric primitive is converted to pixels are given in Chapter 6; for now assume that each pixel accurately represents the average color value of the geometric primitives that cover it. The process of converting a continuous function into a series of discrete values is called sampling. A geometric primitive, projected into 2D, can be thought of as defining a continuous function of its spatial coordinates x and y.

For example, a triangle can be represented by a function f_continuous(x, y): it returns the color of the triangle when evaluated within the triangle's extent and drops to zero when evaluated outside it. Note that the ideal function has an abrupt change of value at the triangle boundaries; this instantaneous drop-off is what leads to problems when representing geometry as a sampled image. The output of the function isn't limited to a color; it can be any of the primitive's attributes: intensity (color), depth, or texture coordinates, and these values may also vary across the primitive. To avoid overcomplicating matters, we can limit the discussion to intensity values without losing any generality.

A straightforward approach to sampling the geometric function is to evaluate the function at the center of each pixel in window coordinates. The result of this process is a pixel image; a rectangular array of intensity samples taken uniformly across the projected geometry, with the sample grid aligned to the x and y axes. The number of samples per unit length in each direction defines the sample rate.
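The sampling step above can be sketched in a few lines of Python (a hypothetical box-shaped primitive stands in for f_continuous; nothing here is from the book's code):

```python
# Sample a continuous intensity function at pixel centers.
def f_continuous(x, y):
    """1.0 inside the primitive, 0.0 outside (abrupt edges)."""
    return 1.0 if 2.0 <= x <= 6.0 and 2.0 <= y <= 5.0 else 0.0

width, height = 8, 8
# One sample per pixel, taken at the pixel's center (px + 0.5, py + 0.5);
# the sample rate here is one sample per unit length in x and y.
pixels = [
    [f_continuous(px + 0.5, py + 0.5) for px in range(width)]
    for py in range(height)
]
print(pixels[3])
```

Each row of `pixels` is a run of intensity samples taken uniformly across the projected geometry, with the grid aligned to the x and y axes.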

When the pixel values are used to display the image, a reproduction of the original function is reconstructed from the set of sample values. The reconstruction process produces a new continuous function. The reconstruction function may vary in complexity; for example, it may simply repeat the sample value across the sample period

f_reconstructed(x, y) = pixel[x][y]

or compute a weighted sum of pixel values that bracket the reconstruction point. Figure 4.1 shows an example of image reconstruction.

Figure 4.1. Example of image reconstruction.
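The two reconstruction styles just described, repeating a sample across its period versus a weighted sum of bracketing samples, can be sketched in one dimension (an editorial illustration, not the book's code):

```python
# Two 1D reconstruction functions over a row of intensity samples.
samples = [0.0, 1.0, 0.5, 0.25]

def reconstruct_hold(x):
    """Repeat the sample value across its sample period."""
    return samples[min(int(x), len(samples) - 1)]

def reconstruct_linear(x):
    """Weighted sum of the two samples that bracket the point x."""
    i = min(int(x), len(samples) - 2)
    t = x - i
    return (1 - t) * samples[i] + t * samples[i + 1]

print(reconstruct_hold(1.9))     # still 1.0: held until the next sample
print(reconstruct_linear(1.5))   # 0.75: halfway between 1.0 and 0.5
```

The held version corresponds to f_reconstructed(x, y) = pixel[x][y]; the linear version is the simplest weighted-sum reconstruction.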

When displaying a graphics image, the reconstruction phase is often implicit; the reconstruction is part of the video display circuitry and the physics of the pixel display. For example, in a CRT display, the display circuitry uses each pixel intensity value to adjust the intensity of the electron beam striking a set of phosphors on the screen. This reconstruction function is complex, involving not only properties of the video circuitry, but also the shape, pattern, and physics of the phosphor on the screen. The accuracy of a reconstructed triangle may depend on the alignment of phosphors to pixels, how abruptly the electron beam can change intensity, the linearity of the analog control circuitry, and the design of the digital to analog circuitry. Each type of output device has a different reconstruction process. However, the objective is always the same, to faithfully reproduce the original image from a set of samples.

The fidelity of the reproduction is a critical aspect of using digital images. A fundamental concern of sampling is ensuring that there are enough samples to accurately reproduce the desired function. The problem is that a set of discrete sample points cannot capture arbitrarily complicated detail, even with the most sophisticated reconstruction function. This is illustrated by considering an intensity function that has similar values at two sample points P 1 and P 3 but varies significantly at a point P 2 between them, as shown in Figure 4.2. The result is that the reconstructed function doesn't reproduce the original function very well. Using too few sample points is called undersampling; its effects on a rendered image can be severe, so it is useful to understand the issue in more detail.

Figure 4.2. Undersampling: Intensity varies wildly between sample points P 1 and P 3.

To understand sampling, it helps to rely on some signal processing theory, in particular, Fourier analysis (Heidrich and Seidel, 1998; Gonzalez and Wintz, 1987). In signal processing, the continuous intensity function is called a signal. This signal is traditionally represented in the spatial domain as a function of spatial coordinates. Fourier analysis states that the signal can be equivalently represented as a weighted sum of sine waves of different frequencies and phase offsets. This is a bit of an oversimplification, but it doesn't affect the result. The corresponding frequency domain representation of a signal describes the magnitude and phase offset of each sine wave component. The frequency domain representation describes the spectral composition of the signal.

The sine wave decomposition and frequency domain representation are tools that help simplify the characterization of the sampling process. From sine wave decomposition, it becomes clear that the number of samples required to reproduce a sine wave must be twice its frequency, assuming ideal reconstruction. This requirement is called the Nyquist limit. Generalizing from this result, to accurately reconstruct a signal, the sample rate must be at least twice the rate of the maximum frequency in the original signal. Reconstructing an undersampled sine wave results in a different sine wave of a lower frequency. This low-frequency version is called an alias. An aliased signal stands in for the original, since at the lower sampling frequency, the original signal and its aliases are indistinguishable. Aliased signals in digital images give rise to the familiar artifacts of jaggies, or staircasing at object boundaries. Techniques for avoiding aliasing artifacts during rasterization are described in Chapter 10.
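The Nyquist argument can be demonstrated numerically (an editorial sketch with made-up frequencies): sampling a 7 Hz sine at only 8 samples per second, well below its Nyquist rate of 14, yields exactly the samples of a 1 Hz alias.

```python
import math

# Sample a 7 Hz sine at 8 samples/second (below the Nyquist rate of 14).
rate = 8
ts = [n / rate for n in range(rate)]

original = [math.sin(2 * math.pi * 7 * t) for t in ts]
alias    = [math.sin(-2 * math.pi * 1 * t) for t in ts]

# sin(2*pi*7*n/8) = sin(2*pi*n - 2*pi*n/8) = sin(-2*pi*n/8):
# the samples cannot distinguish the 7 Hz signal from a 1 Hz alias.
assert all(abs(a - b) < 1e-9 for a, b in zip(original, alias))
print("7 Hz sampled at 8 Hz is indistinguishable from a 1 Hz alias")
```

Since the two signals produce identical samples, no reconstruction function, however sophisticated, can recover the original; this is the mechanism behind jaggies at undersampled object boundaries.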

Frequency domain analysis also points to a technique for building a reconstruction function. The desired function can be found by converting its frequency domain representation to one in the spatial domain. In the frequency domain, the ideal function is straightforward; the function that captures the frequency spectrum of the original image is a comb function. Each "tooth" of the comb encloses the frequencies in the original spectrum; in the interests of simplicity, the comb is usually replaced with a single "wide tooth" or box that encloses all of the original frequencies (Figure 4.3). Converting this box function to the spatial domain results in the sinc function. Signal processing theory provides a framework for evaluating the fidelity of sampling and reconstruction in both the spatial and frequency domain. Often it is more useful to look at the frequency domain analysis since it determines how individual spectral components (frequencies) are affected by the reconstruction function.

Figure 4.3. Ideal reconstruction function.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9781558606593500068

Generate Statements

Peter J. Ashenden , ... Darrell A. Teegarden , in The System Designer's Guide to VHDL-AMS, 2003

Many digital systems can be implemented as regular iterative compositions of subsystems. Memories are a good example, being composed of a rectangular array of storage cells. Indeed, VLSI designers prefer to find such implementations, as they make it easier to produce a compact, area-efficient layout, thus reducing cost. If a design can be expressed as a repetition of some subsystem, we should be able to describe the subsystem once, then describe how it is to be repeatedly instantiated, rather than describe each instantiation individually. In this chapter, we look at the VHDL-AMS facility that allows us to generate such regular structures.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9781558607491500177

MPEG-1 and -2 Compression

TOM LOOKABAUGH , in Multimedia Communications, 2001

7.3.1.1 Representation of Video

The video that MPEG expects to process is composed of a sequence of frames or fields of luma and chroma.

Frame-Based Representation MPEG-1 is restricted to representing video as a sequence of frames. Each frame consists of three rectangular arrays of pixels, one for the luma (Y, black and white) component, and one each for the chroma (Cr and Cb, color difference) components. The luma and chroma definitions are taken from the CCIR-601 standard for representation of uncompressed digital video.

The chroma arrays in MPEG-1 are subsampled by a factor of two both vertically and horizontally relative to the luma array. While MPEG does not specify exactly how the subsampling is to be performed, it does make clear that the decoder will assume the subsampling was designed to spatially locate the subsampled pixels according to Figure 7.2, and it will perform its interpolation of chroma samples accordingly.

FIGURE 7.2. Relationship between luma and chroma subsampling for MPEG-1.
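The cost of this subsampling is easy to tally; the sketch below (using a hypothetical CIF-sized frame, not a figure from the chapter) counts the samples in the three rectangular arrays of one frame:

```python
# Array sizes for one MPEG-1 frame: chroma is subsampled by two
# both horizontally and vertically relative to luma.
width, height = 352, 288            # e.g. a CIF-sized frame

luma_samples = width * height
chroma_samples_each = (width // 2) * (height // 2)   # Cb or Cr

total = luma_samples + 2 * chroma_samples_each
print(luma_samples)          # samples in the Y array
print(chroma_samples_each)   # samples in each chroma array
print(total / luma_samples)  # 1.5: color adds only half again over Y
```

So the two chroma arrays together carry only half as many samples as the luma array, which is the point of subsampling the color-difference components.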

Typically, MPEG-2 expects chroma subsampling to be consistent with CCIR-601 prescribed horizontal subsampling. Spatially, this implies the chroma subsampling pattern shown in Figure 7.3, termed 4:2:0 sampling.

FIGURE 7.3. Relationship between luma and chroma subsampling for MPEG-2.

Field-Based Representation MPEG-2 is optimized for a wider class of video representations, including, most importantly, field-based sequences. Fields are created by dividing each frame into a set of two interlaced fields, with odd lines from the frame belonging to one field and even lines to the other. The fields are transmitted in interlaced video one after the other, separated by half a frame time. Interlacing of video is in fact a simple form of compression by subsampling. It exploits the fact that the human visual system is least sensitive to scene content that has both high spatial and temporal frequencies (such as a fast-moving item with much detail). An interlaced source cannot represent such scenes effectively, but can build up the full detail of a frame (within two field times) and can also update low-resolution items that are changing every field time; these latter types of material are the most visually important.
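The frame-to-field split described above amounts to de-interleaving the lines of the frame; a minimal Python sketch (a toy 6-line frame, not MPEG data):

```python
# Split a frame into two interlaced fields: even-numbered lines in
# one field, odd-numbered lines in the other (lines numbered from 0).
frame = [[line] * 4 for line in range(6)]   # 6 lines, 4 pixels each

top_field    = frame[0::2]   # lines 0, 2, 4
bottom_field = frame[1::2]   # lines 1, 3, 5
assert len(top_field) == len(bottom_field) == len(frame) // 2

# Interleaving the two fields reconstructs the full frame
# (in display, the fields arrive half a frame time apart).
rebuilt = [None] * len(frame)
rebuilt[0::2] = top_field
rebuilt[1::2] = bottom_field
assert rebuilt == frame
```

Each field has half the lines of the frame, which is why interlacing acts as a simple subsampling compression in the vertical/temporal dimensions.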

For field-based sequences, MPEG-2 expects the chroma associated with each field to be vertically subsampled within the field, yet maintain an expected alignment consistent with frame-based sequences. This leads to a vertical resampling pattern as shown in Figure 7.4.

FIGURE 7.4. Relationship between luma and chroma samples vertically and in time for field-based video material.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780122821608500086

Imaging Optics

Matt Young , in Encyclopedia of Physical Science and Technology (Third Edition), 2003

III.B Digital Cameras

These replace the film with a digital receptor called a CCD array. A CCD (charge-coupled device) array is a rectangular array of photosensitive elements, or pixels. A typical array might contain 1300 × 1000 pixels, which require approximately 1.3 MB of digital memory. A 24 × 36-mm color slide, for comparison, might resolve about 60 lines/mm, which translates to 120 pixels/mm, or 4000 × 3000 pixels. The slide thus has roughly three times the resolution of the CCD array in each direction, but duplicating the slide digitally would require about an order of magnitude (3² ≈ 9 times) more memory.
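The arithmetic behind the comparison can be checked directly (assuming, as the text's 1.3 MB figure implies, roughly one byte of storage per pixel):

```python
# Memory comparison between the two images described in the text.
ccd_pixels   = 1300 * 1000    # typical CCD array, ~1.3 MB at 1 B/pixel
slide_pixels = 4000 * 3000    # 24 x 36 mm slide at 120 pixels/mm

print(ccd_pixels)                 # 1.3 million pixels
print(slide_pixels / ccd_pixels)  # ~9.2x, about an order of magnitude
```

Three times the linear resolution in each direction gives 3² ≈ 9 times the pixels, hence roughly an order of magnitude more memory.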

Read full chapter: https://www.sciencedirect.com/science/article/pii/B0122274105003288


Source: https://www.sciencedirect.com/topics/computer-science/rectangular-array
